http://jytangledweb.org/genealogy/software/
Last Update: Tue Aug 02 18:53 EDT 2011
Initial Version Released: Dec 29, 2010
It is my view that we are still waiting for the best and brightest combination of genealogists and programmers to put their minds toward design and delivery of an adequate genealogy program.
These are my own top personal requirements for a viable genealogy program. Lesser requirements will be fleshed out as major ones are met by viable programs.
See my challenge to genealogy program vendors regarding source referencing at: http://jytangledweb.org/genealogy/evidencestyle/ .
And see my latest work on improving the genealogy data model: A Genealogy Data Model (Matrix Algebra) Specification. That specification explains in full detail a model that meets requirement 2 below.
Also see my notes at the bottom of this page on how this analysis affects GEDCOM and leads me to suggest an API model for programs to do On Line Data Mining.
Software programs are evaluated below by how well they achieve each of the requirements, in my personal opinion. The evaluations are my own, formed from a combination of first-hand experience, information gained from software descriptions, and/or collective wisdom on the Internet. The evaluations for the selected programs, like the list of requirements and programs itself, are subject to change at any time. Constructive comments and suggestions are welcome; email me. Note that the ordering of the requirements is not significant.
Program | Version | 1. MP | 2. GDM | 3. IGUI | 4. Update | 5. ES | 6. Tags | 7. Notes | 8. ConLev | 9. Rpt/Chr | 10. PrtMan | 11. MktShr | 12. RIB | 13. 3NF | 14. 12AF | 15. NW | 16. ANO | 17. OLDM | 18. 100k GEDCOM | 19. NamCol | Notes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FTM Windows | | *** | ** | **** | **** | * | FAIL | FAIL | **** | **** | **** | ***** | FAIL | FAIL | FAIL | FAIL | ***** | ***** | | | | |
FTM Mac* | 19.2.0.202 | *** | ** | **** | **** | * | FAIL | FAIL | **** | **** | *** | **** | FAIL | FAIL | FAIL | FAIL | ***** | FAIL | ***** | | 100k GEDCOM hangs at 13 min (saving to database) |
TMG* | | FAIL | *** | ***** | ***** | *** | FAIL | **** | **** | ***** | *** | ** | FAIL | FAIL | FAIL | FAIL | ***** | | | | | |
Legacy* | | FAIL | *** | ***** | ***** | **** | FAIL | **** | **** | **** | *** | **** | FAIL | FAIL | FAIL | FAIL | ***** | ***** | | | 100k GEDCOM took 18 min |
Reunion* | 9.0c | FAIL | *** | ***** | **** | FAIL | FAIL | FAIL | **** | ** | ** | ** | FAIL | FAIL | ***** | FAIL | FAIL | ***** | ***** | | 100k GEDCOM took 15 min |
MacFamilyTree | 6.0.10 | FAIL | FAIL | *** | ** | FAIL | FAIL | FAIL | FAIL | ***** | | | | | | | | | | | | |
Cognatio* | 1.4.3 Light Edition | FAIL | ***** | ** | * | FAIL | FAIL | FAIL | ** | FAIL | * | * | FAIL | FAIL | FAIL | FAIL | FAIL | ***** | | | 100k GEDCOM failed immediately on syntax |
Gramps* | | *** | FAIL | FAIL | ** | *** | FAIL | FAIL | FAIL | FAIL | ***** | | | | | | | | | | | |
GenealogyJ* | | *** | FAIL | FAIL | ** | *** | FAIL | FAIL | FAIL | FAIL | ***** | | | | | | | | | | | |
RootsMagic | | FAIL | FAIL | *** | **** | FAIL | FAIL | FAIL | FAIL | ***** | | | | | | | | | | | |
A star (*) at the end of a Program Name indicates that I have not only gathered wisdom from other sources, but also own and run a copy myself, though perhaps not the latest version.
Blank = Not Evaluated
FTM = Family Tree Maker
My view of GEDCOM (GEnealogical Data COMmunication) is that its implementations are an antiquated bolt-on to an old data model that was useful long ago but deserves no place in a contemporary data model. GEDCOM is, in fact, no longer needed, as its function is inherently provided by contemporary models (although perhaps not by the model your favorite program uses!).
As I have shown in my data model treatments (see the top of this page), for those who have taken the time to understand them, there is only one simple and achievable requirement for full data export and import. It is the same requirement needed for full data representation.
The one requirement is that you define a full set of data variables. There, you are done. Full export and import is now a simple programming exercise, as is moving your data from one program to another. Simple.
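The point about export and import becoming "a simple programming exercise" can be sketched in a few lines. This is purely illustrative: the variable names below are hypothetical stand-ins, not the real superset the round table would define, and JSON is just one convenient serialization.

```python
import json

# Hypothetical fragment of the "full set of data variables"; the real
# superset would be defined by the genealogists' round table (three name
# fields, 12+ address fields, the source reference fields, and so on).
FULL_VARIABLE_SET = [
    "name_given", "name_middle", "name_surname",
    "address_line1", "address_city", "address_country",
    "source_ref_title", "source_ref_page",
]

def export_person(person: dict) -> str:
    """Export every defined variable; absent values stay explicit (null)."""
    return json.dumps({var: person.get(var) for var in FULL_VARIABLE_SET})

def import_person(blob: str) -> dict:
    """Import is the mirror image: read back exactly the same variables."""
    data = json.loads(blob)
    return {var: data.get(var) for var in FULL_VARIABLE_SET}

# Round trip: once the variable set is fixed, nothing is lost in transit.
person = {"name_given": "Ada", "name_surname": "Lovelace",
          "address_city": "London", "source_ref_title": "1841 Census"}
assert import_person(export_person(person)) | person == import_person(export_person(person))
```

Because both sides iterate over the same agreed-upon variable list, moving data between two programs that honor the list is a mechanical round trip, which is the whole argument.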
The hard part is defining this superset of data variables. This is work for intelligent and knowledgeable genealogists, and a great starting set could be had in under five meetings of even a small gathering of the best and brightest genealogists and programmers at the round table. No open democracy; we don't want to carp about it for decades, nor waste time listening to those who don't "get it" (yes, I said it!). (Just think if the Internet protocols had been subject to any user's "input" during their design. No thank you.) The group can issue a report for comment by all for a period of time, meet again at the end of that time, decide which points have merit, and discard those they deem don't. That is the way progress will be made. Simple. The data variables would consist of name fields (three, please), address fields (at least 12, please), source reference data fields (I delineate 577 on my data model site, as I recall at this writing), and the rest of the required fields that genealogists and vendors writing the codes to date have learned are needed.
This list will evolve over time, of course. But the closer to a complete set it is, the smaller the tweaks will be, until reasonable genealogists cannot think of any required extensions. And with new and modern codes, small tweaks will be much easier to achieve than in the legacy (small l!) codes of today.
If one program develops such a data model, it will be trivial to export its data so that any other program will be able to import it, with one big caveat: the exported data has to be dumbed down to the capability of the program importing it. But if the data represents a full set of variables, this WILL ALWAYS be possible. It will, of course, NOT be possible for a data-deficient program to export data so that it can be imported intelligently by a smarter (full data model) program. But once a full data model variable set is defined by intelligent genealogists, who would want to use a program that does not accommodate that full set? Not me.
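The "dumbing down" direction can be sketched as a projection onto the importing program's fields. All field names here are invented for illustration; the point is only that the full-to-lesser direction is always computable, while the reverse is lossy.

```python
# Hypothetical full-model record (field names are illustrative only).
FULL_RECORD = {
    "name_given": "Ada", "name_middle": "Augusta", "name_surname": "Lovelace",
    "address_city": "London", "address_country": "England",
}

# A data-deficient program might keep only one combined name field
# and a single "place" field (again, purely hypothetical).
LESSER_PROGRAM_FIELDS = {"name", "place"}

def downgrade(record: dict) -> dict:
    """Project a full-model record onto what the lesser program can hold."""
    out = {}
    if "name" in LESSER_PROGRAM_FIELDS:
        parts = (record.get(k) for k in ("name_given", "name_middle", "name_surname"))
        out["name"] = " ".join(p for p in parts if p)
    if "place" in LESSER_PROGRAM_FIELDS:
        parts = (record.get(k) for k in ("address_city", "address_country"))
        out["place"] = ", ".join(p for p in parts if p)
    return out

# Once "Ada Augusta Lovelace" is a single string, the lesser program
# cannot reliably export it back into three separate name fields:
# the downgrade is one-way, which is exactly the caveat in the text.
```

This is why a full-model program can always feed a lesser one, but a lesser program cannot intelligently feed a full-model one.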
I thought I'd restate all this here. It is also explained in much more detail at: A Genealogy Data Model (Matrix Algebra) Specification. My hope is that program vendor management will study and understand the data model and its implications. They all seem locked into the past, with outdated data models and programmers just tweaking outdated models and codes.
I have tuned out of the BetterGEDCOM effort (sadly, keeping GEDCOM in the name is pejorative, in my opinion). I think all they, or some group, need to do is produce the full list of data variables. Then they would be done, except for turning it over to programmers to implement. If I were younger and more energetic, I would see a great opening for a business plan to pitch to venture capitalists to fund a genealogy program start-up. But the larger vendors of today are so close, yet so far. Maybe they will get there some day. I've been waiting for four years now. Despair caused me to produce my analyses.
As pointed out above, FTM has built direct data mining of Ancestry's online data into their program, which makes their program very attractive to all genealogists. However, many genealogists strongly prefer maintaining and displaying their data with other programs.
Perhaps a good solution for both the program vendors and Ancestry (and the LDS's FamilySearch.org, with which they seem to be partnering more and more) would be for them to develop a genealogy data mining API (Application Programming Interface) that they could license to other program vendors.
This would not prevent them from offering their own genealogy program, but it would open the door to them making licensing fees from users of other programs. I would think nearly all users of all programs would be happy to pay a fair sum to be able to directly mine the data at Ancestry and FamilySearch.
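What such a licensed data-mining API might look like from a client program's side can be sketched briefly. Everything here is invented for illustration: the endpoint path, the parameters, and the license-key scheme are assumptions, and no actual Ancestry or FamilySearch API is being described.

```python
import urllib.parse

def build_search_request(base_url: str, license_key: str,
                         surname: str, birth_year: int) -> str:
    """Compose the URL a desktop genealogy program might call on the
    user's behalf against a hypothetical licensed data-mining service."""
    query = urllib.parse.urlencode({
        "key": license_key,        # per-vendor license key, as the text proposes
        "surname": surname,
        "birth_year": birth_year,
        "format": "json",
    })
    return f"{base_url}/v1/records/search?{query}"

# A vendor's program would embed its license key and issue requests like:
url = build_search_request("https://api.example.org", "VENDOR-123",
                           "Lovelace", 1815)
```

The key in the query string is where the licensing model would hook in: the service can meter usage per vendor and bill accordingly, while the end user just sees search results inside their preferred program.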
I would think a cost model for using the API could be made proportional to the installed user base of a given program, and each vendor could choose how to pass that cost along to the end user: at purchase time, as an added feature, or... This would (hopefully) allow the small "mom and pop" program vendors to buy in to this feature at a low entry cost.
Developing such an API should be a joint effort of a "best and brightest" team of programmers and genealogists from the companies and users, if and when such a model becomes attractive to all stakeholders.
Judging from their behavior to date, I'm not sure that Ancestry really wants to be in the advanced, professional genealogy program business, so allowing competition from vendors that do offer more professional programs, and collecting fees for it, may be an attractive model for them, for other program vendors, and for users. Ancestry could then focus on "getting the data right" and less on the tool that lets them make more money from the data. (I suspect programming a genealogy program is not a core interest of the company compared with making the data available, but I could be wrong.)
In the absence of this model, I predict that any vendor that cannot offer direct data mining will slowly be squeezed out of market share (even in smaller niche markets?) and probably, eventually, out of business, as FTM (or other direct data-mining programs) evolves to handle professional genealogists' needs.