Browsing articles tagged with "Security"

Podcast: NoSQL and PeopleTalkingTech

Sep 17, 2012   //   by Karen Lopez   //   Blog, Data, Database, NoSQL, Professional Development, Speaking  //  1 Comment

I recently talked with my good friend Denny Cherry (@mrdenny | blog) about my experience at the NoSQL Now! conference and working with NoSQL technologies.  Denny’s new podcast series is called People Talking Tech and he has other interesting topics and people coming up soon.

My comments focused on how NoSQL professionals understand that it means "Not Only SQL"; it can’t mean "No SQL" and still have much of a future.  It’s about using the right tool for the right job, weighing cost, benefit, and risk.

One of the things we talked about on the closing panel is "how do you find somebody that is a good architect who can tell you which types of technologies you can use for which use cases…"

Even though many people talk about NoSQL needing no architecture, we still need people to help choose when and what NoSQL technologies to use.  Seems to me that having experience working hands-on with relational and NoSQL technologies is going to be hugely valuable in the next couple of years.  If you have relational experience, now is the time to start learning about non-relational ones.

By the way, we talked a bit about database security.  Denny’s new edition of his book Securing SQL Server, Second Edition: Protecting Your Database from Attackers has recently been released.  Check it out.

Stolen Laptop Affects 34k Patients–Can You Spot the Problem?

Apr 3, 2012   //   by Karen Lopez   //   Blog, Data, Data Breach  //  No Comments

A recent CMIO post describes the data breach of 34,000 patients’ personally identifiable information.

A former contractor’s personal laptop containing patient information was stolen, according to a statement from Larry Warren, CEO of the hospital. “This information was downloaded in violation of Howard University Hospital policy,” he wrote.

I’ll give you 30 seconds to spot 3 problems with the situation.  Tick, Tock.

I can see three especially worrisome problems:

  • Information was downloaded in violation:  I’m guessing that there was no monitoring of downloads of sensitive data at this medical institution.  This sort of monitoring might have prevented the data from leaving the building.
  • Former contractor:  A person who had access to this sensitive data was allowed to leave the organization with it. I personally refuse to put data like this on my own devices, mainly because I do not want the liability of having to protect it or report it if something were to go wrong.  I am usually the only person on the project who refuses.  Yet I have never even been asked or reminded to remove company data from my storage devices when I move on to other projects.
  • Personal Laptop:  I sometimes use my own equipment when working at a client, normally because client systems are often less powerful than my own and they lack licenses for tools I need to do my job.  But I’d rather use systems that have enterprise-class security, encryption, and monitoring.  I wish more corporate systems supported such practices.
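The download monitoring I mention above doesn’t have to be elaborate to catch a 34,000-row export. Here’s a minimal sketch in Python, assuming a hypothetical audit log of (user, table, rows returned) entries; the table names, threshold, and log format are all made up for illustration:

```python
# Hypothetical sketch: flag bulk reads of sensitive tables from an audit
# log. Table names, the threshold, and the log shape are assumptions,
# not any specific product's schema.
SENSITIVE_TABLES = {"patients", "billing"}
BULK_THRESHOLD = 500  # rows returned in a single query

def flag_bulk_downloads(audit_log):
    """audit_log: iterable of (user, table, rows_returned) tuples.
    Returns the entries that touched a sensitive table in bulk."""
    alerts = []
    for user, table, rows in audit_log:
        if table in SENSITIVE_TABLES and rows >= BULK_THRESHOLD:
            alerts.append((user, table, rows))
    return alerts

log = [
    ("alice", "patients", 12),
    ("contractor7", "patients", 34000),  # the scenario in this breach
    ("bob", "schedules", 9000),          # bulk, but not sensitive
]
print(flag_bulk_downloads(log))
```

A real deployment would feed this from the database’s own audit facility (SQL Server Audit, for example) and alert in near real time; the point is that the rule itself is trivial to express.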

Since the article did not mention that the data was encrypted, I’m guessing it wasn’t.  I’m also wondering why this was ever reported; most former consultants holding data in violation of policy would not self-report, I’m guessing.  Perhaps the laptop was recovered and the breach was discovered that way.

I’ve previously blogged about how poorly medical data is protected.

This sort of data breach makes me mad. It’s nice that the hospital says that they are now “implementing enhanced security measures”, but why didn’t they do that before? Did their compliance officer recommend it but management said “no, too expensive”?  Did their DBA say “the database is encrypted, so we are covered”?  Did the former contractor take the data maliciously? Did he have to put it on his personal laptop? Why do we continue to treat data as if it is someone else’s problem to manage?  Do we not understand that we have a professional obligation to protect patient data?  Even with legislation it seems the message still isn’t making it through to everyone.

Does your organization have security monitoring in place to protect patient or customer data?  If it doesn’t, have you recommended that it do so?  Go do it, now.

How Safe is Your Medical Data? You Don’t Want to Know…

Feb 18, 2012   //   by Karen Lopez   //   Blog, Data, Data Breach  //  1 Comment

So you live in a country that has legislation requiring your health data to be protected and you believe it’s all safe.  If you live in the US, think again.


According to a study by Ponemon Institute sponsored by MegaPath:

  • 91% of small healthcare organizations (think your local doctor, dentist, optometrist or clinic) had experienced a breach of protected health information (PHI) in the previous 12 months; of those, 29% resulted in medical identity theft
  • 52% of small healthcare providers rated their security technology plans as ineffective
  • 43% had experienced medical identity theft in their organizations
  • 55% of respondents had to notify patients of a data breach in the previous 12 months
  • On average, less than 10% of the respondents’ IT budgets are spent on security

You can register and download the entire paper at

I found this table the most interesting, and the most discouraging:


From a data governance and data protection point of view, I’d really expect to see ALL of those be 100%.   My doctor recently moved to mostly electronic health records (as have most in my province), but I’m wondering what his answers to all of these questions would be.  When I think about the 91% data breach numbers, I see this table as one of the key reasons that number is so high.

Even if you aren’t in a health-related organization, I’d expect your numbers to be higher.  Only 63% have backup and disaster recovery plans? How can we call ourselves professionals when this is life-critical information?  Ultimately it is organization leadership who are responsible for protecting data.  But I’ve always been concerned about how far we data professionals should go in ensuring that the public is protected from harm when data policies and practices are not sufficient.  Should we not move to other projects? Report bad practices?  To whom?

This is a US-based study and I’m curious about similar numbers in other countries with and without health data privacy legislation.  If you have links to other sources, please provide them in the comments.

The Repository, Users and Licenses – Clarifying ER/Studio Services

Feb 14, 2011   //   by Karen Lopez   //   Blog, Data Modeling  //  No Comments

In working with Embarcadero ER/Studio (currently branded as ER/Studio XE), I often find that accidental system admins are confused by what appear to be duplicate or overlapping setup steps to get someone up and running with ER/Studio and the ER/Studio Repository.  This is especially confusing if the admin does this only a few times a year or does not have a documented process for setting up accounts.

Before getting into the details, let’s first define some of the terms for this how-to:

Embarcadero License Server: An application that runs on a server for managing the allocation of licenses for users of the ER/Studio suite of tools, including ER/Studio Data Architect, ER/Studio Business Architect, ER/Studio Software Architect, and Schema Validator. If you have concurrent licensing for ER/Studio, you must use the License Server to hand out licenses and keep track of how many are being used.  Other licensing schemes may or may not make use of the License Server.

ER/Studio Data Architect: The ER/Studio data modeling client application.  This application runs on your desktop/laptop.  Also known as the ER/Studio client.  This is the data modeling tool.

ER/Studio Repository: An application that runs on a server for managing versioning and releases of ER/Studio Data and Process models.  This is the service that allows you to check out a model and check it back in.



While the ER/Studio client can be deployed without these other services, enterprises typically deploy both the License Server and the Repository to help manage the complexity of enterprise projects.  However, to most modelers it’s all magic: they run their local copy of ER/Studio and all these services work together seamlessly.

So when a new user needs access, something may need to be set up in two locations:

  1. The License Server
  2. The Repository

And that’s where the confusion sets in, as it can appear that users are being set up in duplicate.  It is even more confusing because organizations can customize how they manage licenses (letting anyone who has network access use a license, limiting licenses to specific machines or login IDs, etc.), so I’ll leave the exact setup steps for another post. In my diagram above, the license server uses a concurrent user list to manage which machine logins (not ER/Studio logins, but Windows User Accounts) are allowed to use a license.  Your licensing scheme may be different.

The reason users, accounts, or machines must be set up in the License Server is that this service provides licenses for each of the ER/Studio suite of tools, not just ER/Studio Data Architect.   It’s also because an organization could deploy ER/Studio clients without the Repository or without the License Server.  They are two different services that can be used independently of each other.

Once an ER/Studio client application has obtained a license, a user can start working with models.  If that user is going to access the Repository, they need a login and a password in the Repository application.  Note that this is different from any data that was provided to the License Server.

To set up a Repository Account in ER/Studio Data Architect:

  1. Run ER/Studio client.
  2. Choose Repository from the menu bar, then Security, then Security Center.


  3. You will be presented with the Security Center window.  Click on the Manage Users tab.


  4. Add the new User here.  You will want to use your organization’s standards for login IDs and passwords.
  5. Once a User has been added, they can be assigned access to specific projects and diagrams via roles.


Once you have completed BOTH steps, allowing the user to obtain a license (via the License Server) and setting them up with a Repository login (User Account) via ER/Studio Data Architect, they should be ready to work.

Technology Is Not the Final Answer….

Nov 26, 2010   //   by Karen Lopez   //   Data, Database  //  4 Comments

Every year Infosecurity performs a security-related experiment.  They ask office workers questions about their passwords, where they work, what they do…then ask for their actual password.  A shocking number of people hand it right over.

OK, so here’s the question: Exactly how ignorant are they? The experiment found that out of 576 people questioned this year, 21% were quite happy to reveal their passwords in exchange for candy.

But maybe some of the dire news of late is sinking in, because that number is a heck of a lot lower than when the same experiment was conducted last year. Back then, a whopping 64% of the respondents were willing to give away their passwords. It seems that users have never paid attention to their mother’s advice about strangers and candy.

A curious aspect of the results was that, of those willing to trade away their passwords, women were 4.5 times more likely to spill the beans than men. Even more astounding was that 61% of all people surveyed happily revealed their date of birth!

This stuff drives me crazy.  I see people handing over personal data all the time in stores in exchange for a free t-shirt or even a free sample of something.   I always chalked this up to naiveté, but I can point to my own derivative experiment based on the Infosecurity one.  When the results are announced each year, I bring this up at work with my IT peers.  Usually 80% of my co-workers are willing to tell me enough about their passwords for me to guess or find out what it is (“My password is always my girlfriend’s birthday, so I never forget it” or “I always use Star Wars, but spelled with a Z instead of an S.”) without my even asking. I’d also say 9 times out of 10, talk turns to passwords for the non-user accounts, say the SA password for a production SQL Server.  For some reason, all sense of security of this information goes out the door as the password is almost always mentioned.  I’ve always wondered if this is because workers don’t value these non-personal resources as much as they do their own browser history, e-mail, and YouTube ratings.

I remember meeting with a potential financial advisor for a very large financial institution. Our talk turned to passwords and I told him about the study where people would hand over their passwords for the most trivial of treats.  He rolled his eyes and then said how stupid IT professionals are to require these. I mentioned that I was an IT professional and that strong passwords were the best defence against data theft and fraud.  He then proceeded to talk about all the new online systems that his company was foisting upon him and his clients.  And then, of course, he told us what his login and passwords were and why they were so easy to remember.  I sat there in stunned silence.  His giving out this information was not a great selling point for his services.  After having bragged about managing millions and millions of dollars of portfolios for some very famous people, then telling me his login credentials, he had basically shown me he could not be trusted with my data or my finances.  Needless to say, he did not get my business.

And what is this “women were 4.5 times more likely” to fall for this scheme?  Are we females really that clueless?  Is it that we avoid confrontation or have been raised to never say “no” when asked for a favor?  That number bothers me.  The Register believes it is because women love chocolate more than security.

I remember another conversation with a budding IT professional.  He had been talking to our intern about how secure the newest encryption technology was and how absolutely unbreakable it was.  As a sage (old) IT pro, I had to break the news to both the intern and the IT-wannabe that the encryption technology was useless in an age of social engineering and corporate cluelessness.  Both were flabbergasted that I could possibly question the value of what was probably 32-bit encryption at the time.  They both spouted off mathematical certainties of how many billions of years it would take to crack the code of highly secure encryption.  I tried to explain to them that technology was not the issue most of the time.  They both rolled their eyes and said that I just couldn’t understand how big the numbers were.

So I dragged our IT-wannabe over to the desk of the CIO’s assistant and lifted up her keyboard to show him the Post-It note with all the CIO’s logins and passwords.  He objected that the list of what were obviously user names and passwords could be anything.  Then I took him over to the DBA set of cubicles and showed him how the whiteboard outside their cubes contained mysterious pairs of what were obviously user names and passwords.  He still didn’t believe me.  So he asked the admin assistant the next day how she kept track of all the logins, and she showed him that she wrote them down on a Post-It and stuck it under her keyboard.  Then he asked the DBAs if those were credentials on the whiteboard, and they first denied it, then admitted it.  He chalked this up to clueless IT people.  So I walked with him back to his cube and pointed out that he kept his own password on a Post-It note stuck on the side of his monitor.   Cluelessness, indeed.

Some days I feel as if all the work we put into data governance, information quality, and information security is for naught.  Why bother if no one values the data in the first place? Why don’t business users and IT caretakers love their data?

I believe that we data management professionals must hold ourselves to a higher standard than what we see in the rest of the world.  We can go on and on about data quality, information integrity, and information protection.  But if we are giving out passwords right and left, writing passwords on whiteboards, and generally following terrible security practices, how are we ever going to convince the business that they need to treat the data better than we do?

Your thoughts?  Your observations?

Required Reading: TOP 25 Most Dangerous Programming Errors

Feb 1, 2009   //   by Karen Lopez   //   Blog, Compliance and Regulation, Data, Data Breach  //  No Comments


The SANS Institute and the Common Weakness Enumeration (CWE) project last week released a list of the top 25 programming errors.  This resource, which lists each error and the project phases/tools/processes to which it applies, should be required reading, on a regular basis, by all team members on a development project. While this page refers to programming errors, I believe this is a great checklist of development errors, as some of them apply to architectural and methodological issues.

SANS Institute – CWE/SANS TOP 25 Most Dangerous Programming Errors

Experts Announce Agreement on the 25 Most Dangerous Programming Errors – And How to Fix Them
Agreement Will Change How Organizations Buy Software.

Project Manager: Bob Martin, MITRE

(January 12, 2009) Today in Washington, DC, experts from more than 30 US and international cyber security organizations jointly released the consensus list of the 25 most dangerous programming errors that lead to security bugs and that enable cyber espionage and cyber crime. Shockingly, most of these errors are not well understood by programmers; their avoidance is not widely taught by computer science programs; and their presence is frequently not tested by organizations developing software for sale.

The impact of these errors is far reaching. Just two of them led to more than 1.5 million web site security breaches during 2008 – and those breaches cascaded onto the computers of people who visited those web sites, turning their computers into zombies.

Even in 2009 I am constantly struggling with getting vendors and my own developers to acknowledge the importance of dealing with these issues.  As a project manager, I’m the one ultimately responsible for ensuring that delivered systems will do no harm, but that’s one of the hardest parts of my job.  Why?

  • Most of my newer developers have never received any formal education, training, or testing on many of these issues.
  • Many vendors rely on customer requests or customer production testing to identify these errors. 
  • Most packages, with anti-reverse engineering clauses in their terms of use, forbid inspecting code for these vulnerabilities.
  • Business users often don’t understand the short and longer term implications of neglecting these professional issues…nor should they have to.  But since we don’t have a "building code" or standards of practice in IT, we architects and project managers have no external authority to fall back on when users want to cut the security and protection steps of a project.
  • Many people still naively cling to the belief that the tools they use automatically protect them from these weaknesses.

Of particular interest to those of us working in the data and information responsibilities of a project are these development errors:

CWE-20: Improper Input Validation

It’s the number one killer of healthy software, so you’re just asking for trouble if you don’t ensure that your input conforms to expectations…MORE >>

I am constantly asked to allow the programmers to research and implement the validation rules for input data, since this cuts down on the amount of analysis needed and allows coders to get coding faster… and it always leads to less-than-acceptable validation, as coders don’t have time to go research the data; they need to be coding.  It’s a vicious circle.
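To make the point concrete, here is a minimal sketch of what CWE-20-style validation looks like when it is driven by analyzed rules rather than whatever the coder can guess. The field names, rules, and Canadian postal-code format are illustrative assumptions, not from any particular system:

```python
import re

# Illustrative validation rules; in practice these come from data
# analysis, not from the programmer's guesswork.
POSTAL_CODE = re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$")  # Canadian format

def validate_patient_record(record):
    """Return a list of validation errors; empty means the record passes."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    age = record.get("age")
    if not isinstance(age, int) or not 0 <= age <= 130:
        errors.append("age must be an integer between 0 and 130")
    if not POSTAL_CODE.match(record.get("postal_code", "")):
        errors.append("postal_code is not a valid Canadian postal code")
    return errors

print(validate_patient_record({"name": "K. Lopez", "age": 42, "postal_code": "M5V 1J1"}))
print(validate_patient_record({"name": "", "age": 200, "postal_code": "12345"}))
```

The code is trivial; the hard part, and the part that gets skipped, is the research behind each rule.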

CWE-89: Failure to Preserve SQL Query Structure (aka ‘SQL Injection’)

If attackers can influence the SQL that you use to communicate with your database, then they can…MORE >>

This involves using the lowest level of authority required to get the job done, among other things.  Yet developers usually want to develop, test, and deploy while using administrator-level authority.  Code should not be tested while running under administrative authority, since it should not be deployed that way, either.  It is amazing to me how many people tell me they *must* have the SA password in order to code.  They may need some administrative-like rights, but no one needs the SA account to develop code.  Not even DBAs.
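The injection itself is worth seeing side by side with the fix. This sketch uses Python’s built-in sqlite3 driver purely for portability; the same parameterization idea applies to any SQL Server client library:

```python
import sqlite3

# In-memory toy database for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "alice' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the query itself,
# so the WHERE clause becomes always-true and every row comes back.
unsafe_sql = "SELECT role FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(unsafe_sql).fetchall())

# Safe: the driver binds the value as data, never as SQL, so the query
# looks for a user literally named "alice' OR '1'='1" and finds none.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe_rows)
```

Parameterization stops the query from being rewritten; least privilege then limits the damage if something slips through anyway. You need both.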

I work with a few vendors who tell me that their packaged application must run under the SA account and the Windows Administrator account, in production.  No amount of discussion with their "lead developer" will change their minds. It’s pure laziness and cluelessness to design a product that requires these rights. I have convinced many a client to replace software (and therefore vendors) that requires this type of authority.

I find this list to be of sufficient importance that I’m recommending that teams schedule a specific effort to review, discuss, and create an action plan for addressing these items.

So go pour yourself a coffee/tea/cola/water and start reading.  Your customers will thank you.

