I was preparing for my webinar tomorrow for Idera when I decided to look up how long ER/Studio Data Architect has been around. I was happy to see from the press release for ER/1 (what it was called before they got into a bit of a trademark issue with the ERwin* folks) that it was released on 15 March 1996.
I started using ER/1 not too long after that.
Some Interesting ER/Studio Trivia
- ER/1 listed for $1399 a seat, but there was a special deal for a few months to get it for $899.
- It could handle “hundreds of entities”
- It did not feature bidirectional updating between logical and physical models
- It did not yet feature on-diagram editing
- You can still download the Documentation for ER/1 1.0
- It supported:
- Oracle 7
- Sybase 11 and 10
- Microsoft SQL Server 6
- SQL Anywhere
- SQL Base
- “ER/1 can x-ray your databases and extract their structure” < Love this.
- It followed IDEF1X methodology adopted as part of the Federal Information Processing Standards
- Submodelling (subject area diagramming) was not supported yet.
- There was a separate product ER/1 for Borland Interbase
March 15, 1996
Embarcadero Technologies Ships ER/1 Data Modeling Tool
San Francisco, CA, March 15, 1996, Embarcadero Technologies today announced the general availability of ER/1, a new visual, entity-relationship modeling tool. ER/1 supports all major SQL database platforms, including Oracle7, Sybase 11 and 10, Microsoft SQL Server 6, Informix, DB2, SQL Anywhere, Watcom and SQL Base.
ER/1 delivers a slew of features that promote high-quality, functionally correct data models as well as unparalleled power, ease of use and value. Its highly customizable design allows you to create visually appealing diagrams with such tools as dockable toolbars, diagram zooming, and print scaling. Powerful inheritance logic is built into ER/1 providing referential integrity throughout your data model. In addition, ER/1 provides you with the following major features to facilitate the creation of both logical and physical designs:
Accurate and Quick Reverse Engineering
ER/1 x-rays your databases and extracts their structure into entity-relationship diagrams capturing the complete definition of your tables, including constraints, primary keys, foreign keys, indexes, table and column comments and all table dependencies.
Automatic Database Builds
ER/1 uses an ODBC connection to create a physical implementation of the logical database design you created in ER/1. This one-step process involves the creation of tables, indexes, triggers, stored procedures, views, defaults, rules and user datatypes and properly orders the creation of these objects to eliminate dependency errors.
Re-usable Data Dictionary
This feature promotes code re-use by providing a central repository to store rules, defaults, and user-defined datatypes. Once you establish a business rule as a Data Dictionary object, it is re-usable throughout your diagram. In addition, the Data Dictionary supports global updates of these objects. Just make the change once in the dictionary and ER/1 automatically propagates these changes throughout your diagram.
Comprehensive Reporting
ER/1 offers the most comprehensive reporting of any data modeling tool. It completely documents both your logical and physical designs and generates professionally formatted and structured reports at the summary or detail level.
Code Generation for Team Development
ER/1 can write SQL source code files ready for version control and team development. To facilitate team programming, you can generate separate source code files.
ER/1 for Windows 95 and Windows NT is priced at $1399 per user. Through April 30, 1996, Embarcadero Technologies is offering a special introductory price of only $899 per user.
About Embarcadero Technologies, Inc.:
Embarcadero Technologies is a software products company specializing in tools to design, create, administer, query, program and monitor Oracle, Sybase, Microsoft, and Informix databases. Embarcadero offers a suite of products marketed to corporate customers and database professionals worldwide and has rapidly become the leading provider of database administration tools for Sybase and Microsoft SQL Server. Embarcadero’s software has been recognized for excellence with outstanding independent product reviews conducted by PC Week, DBMS, Microsoft BackOffice Magazine and Databased Advisor.
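The press release's claim that ER/1 "properly orders the creation of these objects to eliminate dependency errors" describes what amounts to a topological sort over object dependencies: tables before their indexes, user datatypes before the tables that use them, and so on. A minimal sketch of that idea in Python (the object names here are invented for illustration; this is not how ER/1 actually implemented it):

```python
from graphlib import TopologicalSorter

# Hypothetical database objects, each listing what must exist before it.
deps = {
    "udt_money":     [],                # a user-defined datatype has no prerequisites
    "table_orders":  ["udt_money"],     # the table uses the datatype
    "index_orders":  ["table_orders"],  # indexes need their table first
    "view_sales":    ["table_orders"],  # views reference the table
    "trigger_audit": ["table_orders"],  # triggers attach to the table
}

# static_order() yields a creation order where every object's
# dependencies appear before the object itself.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Any DDL generator that emits objects in this order avoids "object does not exist" errors during a build.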
Data Modeling Tools are Experienced
One of the reasons why some people find data modeling tools overwhelming is that they’ve been around for more than 20 years. That’s a long time for these tools to get more customized, more feature-rich, more complex.
I should give a shout out to Greg Keller, who was the product manager during the time I started using ER/Studio.
So happy birthday, Embarcadero…I mean…Idera…ER/1….ER/Studio. I’m going to have a cupcake in your honor! Maybe twenty.
*Say “ER One” Then say “ER WIN”. Yeah, almost a SOUNDEX trademark issue.
I’m visiting Dallas this week to speak at the North Texas SQL Server User Group this Thursday. I’ll be speaking about keys: primary keys, surrogate keys, clustered keys, GUIDs, SEQUENCEs, alternate keys…well, there’s a lot to cover about such a simple topic. The reason I put this presentation together is I see a lot of confusion about these topics. Some of it’s about terminology (“I can’t find anything about alternate keys in SQL Server…what the heck is that, anyway”), some of it is misunderstandings (“what do you mean IDENTITIES aren’t unique! of course they are…they are primary keys!”), some of it is just new (“Why the heck would anyone want to use a SEQUENCE?”).
We’ll be chatting about all these questions and more on Thursday, 17 March at the Microsoft venue in Irving, Texas starting at 6PM.
Attendance is free, but you need to register at http://northtexas.sqlpass.org/ to help organizers plan for the event.
Don’t worry if you don’t know SQL Server or don’t use it: this presentation will focus on some SQL Server-specific features, but the discussion is completely portable to other DBMSs.
So many of us have learned database design approaches from working with one database or data technology. We may have used only one data modeling or development tool. That means our vocabularies around identifiers and keys tend to be product specific. Do you know the difference between a unique index and a unique key? What about the difference between RI, FK and AK? These concepts span data activities, and it’s important that your team members understand each other and know where they, their tools, and their approaches need to support these features. We’ll look at the generic and proprietary terms for these concepts, as well as where they fit in the database design process. We’ll also look at implementation options in SQL Server and other DBMSs.
Hope to see you there!
CA has completed the sale of the ERwin data modeling business to Parallax Capital Partners, a private equity firm with an exceptional track record of transitioning divisions, subsidiaries and product lines into successful stand-alone entities.
The transaction, which closed on February 29, is a win-win scenario that was carefully designed to ensure mutual value and a seamless transition for customers, partners, and each of the approximately 60 ERwin employees worldwide. This move also aligns with our global partner strategy, which is an important component to CA’s growth model.
With this divestiture, ERwin is an independent company that will continue to be led by its current management team.
Parallax Capital is a private equity firm that specializes in lower middle market (between $5 and $100 million) software companies. Looking at their current portfolio, I recognize only a couple of companies, with Micro Focus being the one I recognized instantly, though they sold that in the early 2000s. Parallax owns a diverse set of companies, so I’m not sure where they will go with the ERwin Modeling product set.
What I do know is that CA was clear after the failed Embarcadero purchase attempt that they were still intending to sell off ERwin, so a purchase is important to the ERwin user market. I have no other information and expect that initial communications will be that everything is remaining the same until it changes.
This quote, “This move also aligns with our global partner strategy, which is an important component to CA’s growth model,” appears to imply that CA did not consider data modeling a growth area of the enterprise software business. As sad as that is, I agree.
My initial feelings are that having the ERwin business owned by an entity that does not own a competing product is likely best for customers. Competition is good, for technical quality, innovation and pricing.
UPDATE: A new, more upbeat announcement has gone up on ERwin.com: http://erwin.com/resources/news/erwin-divested-from-ca-technologies/
What do you think the impact of this sale will be on you and the data modeling market?
One of the most clichéd blogging tricks is to declare something popular as dead. These desperate, click-bait posts are popular among click-focused bloggers, but they’re not for me. Yet here I am, writing an “is dead” post. Today, though, it’s about sharing my responses to on-going social media posts. They go something like this:
OP: No one loves my data models any more.
Responses: Data modeling is dead. Or…data models aren’t agile. Or…data models died with the waterfalls. Or…only I know how to do data models and all of you are doing it wrong, which is why they just look dead.
I bet I’ve read that sort of conversation at least a hundred times, first on mailing lists, then on forums, now on social media. It has been an ongoing battle for modelers since data models and dirt were discovered…invented…developed.
I think our issue around the love for data modeling, and logical data models specifically, is that we try to make these different types of models into different tasks. They aren’t. In fact, there are many types, many goals, and many points of view about data modeling. So as good modelers, we should first seek to understand what everyone in the discussion means by that term. And what do you know, even that is contentious. More on that in another post.
I do logical data modeling when I’m physical modeling. I don’t draw a whole lot of attention to it – it’s just how modeling is done on my projects.
Data Modeling is Dead Discussion
One current example of this discussion is taking place right now over on LinkedIn. Abhilash Gandhi posted:
During one of my project, when I raised some red flags for not having Logical Data Model, I was bombarded with comments – “Why do we need LDM”? “Are you kidding”? “What a waste of time!". The project was Data Warehouse with number of subject areas; possibility of number of data marts.
I have put myself into trouble by trying to enforce best practices for Data Modeling, Data Definitions, Naming Standards, etc. My question, am I asking or trying to do what may be obsolete or not necessary? Appreciate your comments.
There are responses that primarily back up the original poster’s feeling of being unneeded on modern development projects. Then I added another viewpoint:
I’ll play Devil’s advocate here and say that we Data Architects have also lost touch with the primary way the products of our data modeling efforts will be used. There are indeed all kinds of uses, but producing physical models is the next step in most. And we have lost the skills to work on the physical side. Because we let this happen, we have also failed to make physical models useful for teams who need them.
We just keep telling the builders how much they should love our logical models, but have failed to make the results of logical modeling useful to them.
I’ve talked about this in many of my presentations, webinars (sorry about the autoplay, it’s a sin, I know) and data modeling blog posts. It’s difficult to keep up with what’s happening in the modern data platform world. So most of us just haven’t. It’s not that we need to be DBAs or developers. We should, though, have a literacy level of the features and approaches to implementing our data models for production use. Why? I addressed that as well. Below is an edited version of my response:
We Don’t All Have to Love Logical Data Modeling
First of all, the majority of IT professionals do not need to love an LDM. They don’t even need to need them. The focus of the LDM is the business steward/owner (and if I had my way, the customer, too). But we’ve screwed up how we think of data models as artefacts that are “something done on an IT project”. Sure, that’s how almost all funding gets done for modeling, and it’s broken. But it’s also the fact of life for the relatively immature world of data modeling.
We literally beat developers and project managers with our logical data modeling, then ask them “why don’t you want us to produce data models?” We use extortion to get our beautiful logical data models done, then sit back and wonder why everyone sits at another lunch table.
I don’t waste time or resources trying to get devs, DBAs or network admins to love the LDMs. When was the last time you loved the enterprise-wide AD architecture? The network topology? The data centre blueprints and HVAC diagrams?
Data Models form the infrastructure of the data architecture, as do conceptual models and all the models made that would fill the upper rows of the Zachman Framework. We don’t force the HVAC guys to wait to plan out their systems until a single IT application project comes along to fund that work. We do it when we need a full plan for a data centre. Or a network. Or a security framework.
But here we are, trying to whip together an application with no models. So we tell everyone to stop everything while we build an LDM. That’s what’s killing us. Yes, we need to do it. But we don’t have to do it in a complete waterfall method. I tell people I’m doing a data model. Then I work on both an LDM and the PDM at the same time. The LDM I use to drive data requirements from business owners, the PDM to start making it actually work in the target infrastructure. Yes, I LDM more at first, but I’m still doing both at the same time. And yes, the PDM looks an awful lot like the LDM at first.
Stop Yelling at the Clouds
The real risk we take is sounding like old men yelling at the clouds when we insist on working and talking like it’s 1980 all over again. I do iterative data modeling. I’m agile. I know it’s more work for me. I’d love to have the luxury of spending six months embedded with the end users coming up with a perfect and lovely logical data model. But that’s not the project I’ve been assigned to. It’s not the team I’m on. Working against the team invites a demand that no data modeling be done and that database and data integration work be done by non-data professionals. You can stand on your side of the cubicle wall, screaming about how LDMs are more important, or you can use the data modeling skills you have to make it work.
When I’m modeling, I’m working with the business team drawing out more clarity of their business rules and requirements. I am on #TeamData and #TeamBusiness. When the business sees you representing their interests, often to a hostile third party implementer, they will move mountains for you. This is the secret to getting CDMs, LDMs, and PDMs done on modern development projects. Just do them as part of your toolkit. I would prefer to data model completely separately from everyone else. I don’t see that happening on most projects.
The #TeamData Sweet Spot
My sweet spot is to get to the point where the DBAs, Devs, QA analysts and Project Managers are saying “hey, do you have those database printouts ready to go with the DDL we just delivered? And do you have the user ones, as well?” I don’t care what they call them. I just want them to call them. At that point, I know I’m also on #TeamIT.
The key to getting people to at least appreciate logical data models is to just do them as part of whatever modeling effort you are working on. Don’t say “stop”. Just model on. Demonstrate rather than tell: show your teams where the business requirements are written down, where they live. Then demonstrate how that leads to beautiful physical models as well.
Logical Data Modeling isn’t dead. But we modelers need to stop treating it like it’s a weapon. Long Live Logical!
I had so many more questions I wanted to talk about during my recent State of the Union of Data Modeling 2016, but one hour goes by quickly when you have tools, industry, professionals, standards and user groups to cover. I’m interested in your observations and comments about these questions:
Has data modeling accomplished all it needs to? Are we just in the maintenance phase of data modeling as a practice and profession?
What industry trends (tools, processes, methods, economics, whatever) are impacting (positive or negative) data modeling the most today?
How has the cost of data modeling changed since the 1980s?
How has the return on data modeling changed since the 1980s?
How has risk changed in data modeling since the 1980s?
Data modeling tools have so much feature maturity today, and prices have reflected those changes. How have the prices of enterprise data modeling tools impacted data modeling on enterprise projects?
Have you worked with any non-IDEF1X/IE data modeling notation recently?
Have you worked with any open source data modeling tools?
What new features/enhancements/changes would you like to see in data modeling tools? Processes? Notations?
Why haven’t we solved the “no one loves me or my models” problem more widely?
I’ll add my thoughts on these in the comments, but I’d like to hear your responses as well.
My friend Joey D’Antoni ( @jdanton | blog ) and I will be giving a workshop at NoSQLNow! about new database and datastore technologies like Hadoop, Neo4j, Cassandra, Vertica, Document DB, and others. This will be a fast-paced, demo-heavy, practical session for data professionals. We’ll talk about where a modern data architecture would best use these technologies and why it’s not an either/or question for relational solutions in a successful enterprise. And, as always, our goal is to make the time we spend fun and interactive. This session will be a great starting point for some other sessions on Monday that go into data modeling for NoSQL as well as for all the other in-depth, database-specific talks the rest of the week.
Sunday, April 17, 2016
We’ve been busy keeping relational data consistent, high quality, and available. But over the last few years, new database and datastore technologies have come to the enterprise with different data stories. Do we need all our data to be consistent everywhere? What does data quality mean for analytics? Will we still need relational databases?
Learn how traditional and new database technologies fit in a modern data architecture. We will talk about the underlying concepts and terminology such as CAP, ACID and BASE and how they form the basis of evaluating each of the categories of databases. Learn about graph, Hadoop, relational, key value, document, columnar, and column family databases and how and when they should be considered. We’ll show you demos of each.
Finally, we will wrap up with 7+ tips for working with new hybrid data architectures: tools, techniques and standards.
Use code “DATACHICK” to save:
$100 off for Tutorials Only + Seminar Only Registration and $200 off for Full Event, Conference+Tutorials, Conference +Seminar, and Conference Only Registration.
Super early registration ends 29 January, so take advantage of both discounts now (yes, they stack!).
Last year I participated in the first Data Field Day in San Jose. I’m honoured to be a delegate for the tenth Tech Field Day, which follows the same format. On 3-5 February I’ll be in Austin, Texas visiting with vendors in the software, hardware and virtualization world. There will be twelve of us participating, along with our fearless host, Stephen Foskett ( @SFoskett ).
We will be visiting these vendors during TFD10:
At each vendor visit there will be livestreaming during their presentation, and we will discuss their products and services and ask questions. You can follow that stream above. Delegates are known for their brutal honesty, their insight and even some fun observations.
You can also follow along on Twitter with the hashtag #TFD10, and post your own questions for these sessions using that hashtag.
What I love about field days is the mix of delegates with a wide background in business, tech, innovation, entrepreneurship and data. This breadth means that we, as a team, look at the technology and business with a variety of viewpoints. And you get to watch it all live.
BTW, the next Data Field Day is scheduled for 8-10 June. If you have products or services you’d like to present to a team of independent data experts, contact me.
I hope you can follow along. It’s a great chance to see real world tech innovation discussions.