
Posts

14-Oct-10 02:08:30
Crystal can't support Subreport within Subreport... now what?
Category: Using EMu

Hi Rowena,

Are you running this report from Events?
If you are, when selecting your fields in EMu try grouping all the Catalogue fields in one group.
So in that one group (called ObjAttachedObjects in my screenshot) you have all the Catalogue fields you require, plus another group for your Dimension fields. That ObjAttachedObjects group also contains your Section and Notes fields from Events (ObjEventSection_tab and ObjAttachedObjectsNotes_tab).

In Crystal Reports, add both tables, eevents_csv and ObjAttac_csv (for the ObjAttachedObjects group in EMu), with a left outer join on the eevents_key field in both tables.
When you add the fields to the Details section in Crystal, create a subreport for your dimensions (its 'Field to link to' should be ObjAttac_csv.ObjAttachedObjects_key).

So in your main report you have all the Catalogue information (minus Dimensions) and all the Events information as well. And your Dimensions as a subreport.
I'm not sure what you are trying to achieve but this should produce a report with all the information repeated for each row you have in the EMu table 'Objects Associated With Event' (whether or not there actually is an object attached in the row).

Is that what you needed?

Christelle

Hi everyone,

The agenda, transportation details and tour information have been posted in the User Group Meeting section of our website.

Please let me know if you have any questions.

Regards,
Sylvia

Hello everyone,

We have decided to make a small change to the format of the 2010 North American User Group Meeting. In lieu of Discussion Groups on one of the days of the meeting, we are going to try a "Showcase Session" (suggestions for better names are welcome!). The Showcase Session is an opportunity for brave members of the EMu User Community to stand up and showcase their EMu system via a live connection.

To give the showcase some structure, we ask that presenters attempt to answer the following questions:

What issue were you facing?
How would modifying EMu help you address the issue?
What modifications did you make?
How did the modifications improve the situation?

I'd expect that each session would be 10-20 minutes, and that we'd need 2-5 volunteers. I reserve the right to adjust these numbers based on the success or failure of this year's sessions. In order to volunteer, you need to have some means of connecting to the EMu environment at your institution, e.g. WebEx, VPN, Remote Desktop, Citrix, etc.

If you'd like to volunteer, or want to ask a few questions before putting up your hand for this, send me an email at brad.lickman@kesoftware.com, or call at 416-238-5032.

Cheers,
Brad

Hi guys,

Just a reminder that the 2010 North American EMu User Group Meeting is less than 1.5 months away. To register for the meeting, please email us with the subject "6th North American UGM Registration". If you have not started making travel arrangements, please do so relatively soon. For more information and hotel options, please check out our website. Please note that KE Software will provide a shuttle service from the Hampton Inn Hotel to and from the museum.

Hope to see you all in Pensacola!

Cheers,
Sylvia

03-Aug-10 12:29:54
Things you should know about the EMuUsers Forum
Category: Announcements

How to change your password

If you know your password:

  1. Log in to the site using the log in box on the right (if your password doesn't work, try entering: keguest).

  2. A menu will display at the very top of the site. Select My Details.



If you don't know your password:

  1. Select the Login tab in the right hand column.

  2. Select Forgot your password?

  3. Follow the steps to reset your password.

03-Aug-10 12:25:46
Things you should know about the EMuUsers Forum
Category: Announcements

How to update your profile:

1. Select Users > My Profile from the Forum Menu bar.

Hi everyone,

The National Museum of Naval Aviation and KE Software are pleased to invite you to the 6th Annual North American EMu User Group Meeting, which will take place at the National Museum of Naval Aviation in Pensacola, Florida from 14-15 October 2010.  As usual, the North American EMu User Group Meeting will be preceded by the Natural History Special Interest Group (NHSIG) on 13 October 2010.

If you are planning to attend this year please register soon. Please note that KE will not be reserving a block of rooms this year, so you will need to make your own accommodation arrangements. We will be providing a shuttle service from Pensacola Beach to and from the venue. Please visit our website to view different accommodation options and for further information.

If you are interested in presenting or hosting a discussion group, please contact me via email or call me at 1-604-877-1960.

Hope to see you there!

Thanks,
Sylvia

Perhaps this has already been addressed and I missed it. I hope so.

The inability of the "Web 5 Objects" to easily include reverse attachments is, I believe, a fairly serious handicap to what is otherwise a very useful set of class objects.

Is there any plan to add this functionality? It would be very useful and would save those of us who are developing sites with the Web 5 Objects a lot of trouble.

Thanks,
Richard

Hi Kyle,

There are a few ways of doing this. The most basic and safe way is to use the efieldhelp table, which acts as a data dictionary of sorts. You can query this like any other table (e.g. select ColColumnField from efieldhelp where ColColumnModule='ecatalogue').

With this option, you're accessing derived information. The table is built when EMu is upgraded, so it is not a dynamic representation of the table structure. If the method used to derive this data is working properly, this shouldn't be an issue. Whether this might present problems really depends on the nature of the application you're building.

There are other, more powerful options. However, these are risky, complex and undocumented.

Regards

Forbes Hawkins
Museum Victoria

ok...so after typing a long post..only to have it be sucked into the ether by a login timeout...here is the short version of my post...

this is all in relation to a single keyword search box. so...if a user types in "motorcycle and trip and world not kayak" this string can easily be exploded into an array with a separator of " and " or " or "...or " not ". Which i've done, but now i'm thinking it might be smarter to convert the string into something where the " and " and " or " and " not " have been replaced by special characters...then explode and loop over this string ("motorcycle,trip,world!kayak") and come up with your texql params:
$qry->Term("AdmWebMetadata", $xxx[$i]);
and
$qry->TexqlTerm("AdmWebMetadata not contains ".$aryNotValues[$i]);

is this the kind of thing you could post a snippet for? The Ands, Ors, and Nots are pretty easy on their own...but when you start thinking about all the combinations of things...it gets a little gnarly. so if you have some tips or code that could handle what i'm trying to do..that would be much appreciated. but in the meantime, i'll just keep plugging.
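For what it's worth, the split-then-loop step described above can be sketched in a few lines. This is Python rather than PHP, purely to illustrate the parsing side; the function name is made up, and the resulting (operator, term) pairs would feed the $qry->Term / TexqlTerm calls shown in the snippet:

```python
import re

def parse_keywords(query):
    """Split a keyword query on ' and ', ' or ', ' not ' into
    (operator, term) pairs. The first term gets an implicit 'and'."""
    # A capturing group makes re.split keep the operators in the result list.
    tokens = re.split(r'\s+(and|or|not)\s+', query.strip())
    terms = [('and', tokens[0])]
    # tokens alternates term, op, term, op, term...
    for op, term in zip(tokens[1::2], tokens[2::2]):
        terms.append((op, term))
    return terms

print(parse_keywords("motorcycle and trip and world not kayak"))
# [('and', 'motorcycle'), ('and', 'trip'), ('and', 'world'), ('not', 'kayak')]
```

Because the operators survive the split, there is no need for the intermediate special-character string; each pair can be mapped straight to a Term or TexqlTerm call.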

thanks
jason

Thanks, Simon...that was just what i was looking for.

I'm working on a php web interface to EMu...it looks to me like there is built-in functionality for boolean searches and special character searches in EMu, but i can't find any info on how to make them work via the php web interface.

For instance, i'd like to be able to do a keyword search such as "motorcycle" or "trip" to find all records that have either of those words.

I'm very new to this project...so maybe i'm missing something.

Thanks!
jason

We (KE and Ducky) have reached a compromise on the design of the Notes tab to be implemented in the base Conservation module.

Ducky's proposed design for the Notes tab was simply to remove the Bibliographic Reference field, as this was to be re-purposed on the new References tab. However, the design of the Notes tab as it exists in the current base Conservation module is a design agreed upon by NHM, AMNH, LACM, MUSIT and TEPAPA, and in the future will become the base Notes tab across all modules.

Ducky has explained that the original proposal for the redesign of the Conservation module was never intended to contain this Notes tab, and the current proposed redesign was to address this issue. However, in the interests of standardization, Ducky has agreed on a compromise to keep the Notes tab "as is".

There was also some discussion on the proposed References tab. Adding the fields "Page/Figure/Plate" and "Caption/Text" flat into the Conservation module (they currently exist in the Bibliography module) does go against good database design, but all parties agreed that in the interest of ease of data entry the proposed design of the References tab was acceptable. Please be aware that if in the future the Bibliography module is redesigned having these fields flat in the Conservation module *may* cause some headaches, but we'll cross that bridge if we come to it.

In summary, Ducky and KE have agreed to a compromise on the proposed design of the Notes tab in that the Bibliographic Reference field stays, and the References tab will be implemented as it is designed in Ducky's original proposal.

Thanks to everyone for their input on this matter, and specifically to Ducky for her understanding in reaching a compromise.

The Conservation changes will be implemented in the very near future.

Best regards,

Brad Lickman
KE Software (Toronto)

02-Dec-08 11:00:00
Category: EMu Administration
Forum: EMu Admin

Hi Cathy,

I'd like to make myself available to help, please watch your inbox!

Cheers,
Brad Lickman
KE Software

Alan,

It seems we can't access lin-emu.winterthur.org. Is this behind a firewall? Any chance it could be opened up to us?

Regards,
John

04-Jul-08 09:00:00
Category: Using EMu

This is just really a note to the list to let everyone know that this issue was eventually resolved. The comments offered by people were indeed correct. But the problem Tracy had been experiencing was due to a clashing Registry entry. The impact of this Registry entry was deceptive and so it turned out to be difficult to track. But once identified, removal of the Registry entry solved the problem and notifications are now working correctly.

Thanks,
John Doolan

03-Jul-08 09:00:00
Category: EMu Administration
Forum: Texpress

Alan,

The problem you are experiencing stems from Texql's poor attempt at optimising the table multiplication on the FROM line of your search. Texql includes several object-oriented extensions which means that it can support some pretty complex searches. But unfortunately this sometimes compromises the ability to optimise a search.

To be more specific, your FROM line contains two tables - "from ecatalogue, emultimedia". Depending on the remainder of the search, this can have the effect of simply multiplying the two tables and then applying the WHERE clause to the result, which obviously isn't efficient if your two tables are large. It would be adequate if your two tables had fewer records in them.

But Texql allows you to embed a complete table search on a FROM line. So instead of specifying "emultimedia" on the FROM line, you can put:

(
select all
from emultimedia
where exists
(
ChaRepository_tab
where ChaRepository contains 'ContentDM'
)
and exists
(
ChaAudience_tab
where ChaAudience contains 'Quilt Collection'
)
)

This has the effect of reducing the number of records in one of the tables on your FROM line.

You can do the same for ecatalogue. For example, assume you wanted to find only the objects whose name contained "red". Then you would replace "ecatalogue" on the FROM line with something like:

(
select all
from ecatalogue
where ObjObjectName contains 'red'
)

Now both tables on the FROM line contain small matching sets. Texql simply multiplies these. You apply the remaining join criteria to this. The entire search then becomes:

select ecatalogue.ObjObjectID, emultimedia.Multimedia
from
(
select all
from ecatalogue
where ObjObjectName contains 'red'
),
(
select all
from emultimedia
where exists
(
ChaRepository_tab
where ChaRepository contains 'ContentDM'
)
and exists
(
ChaAudience_tab
where ChaAudience contains 'Quilt Collection'
)
)
where exists
(
ecatalogue.MulMultiMediaRef_tab
where MulMultiMediaRef_tab=emultimedia.irn
)

This is likely to give you a more efficient result. I say likely because it very much depends on the entire search. Texql is very flexible as a language and gives you many different ways to achieve a result. Ideally the Texql optimiser would be able to work out the most efficient way to implement every one of these but unfortunately it doesn't. The way you specify your search can have a dramatic effect on performance.

You've kindly condensed your request into a brief email but it means that we don't have a full understanding of your overall requirements. My Texql suggestion above might help you but then again it may not and there might be much better ways to address the problem. To give more accurate advice, we need to understand the overall requirement and the characteristics of your data sets.

I suggest that we push this discussion into the KE support queue until we zero in on the most appropriate solution and then we can report the result back to EMuUsers.Org.

Kind regards,
John Doolan

03-Jul-08 09:00:00
Category: EMu Administration
Forum: Texpress

Hi Alan

Basically, my rule of thumb when using texql is to simply avoid table joins altogether, because in *almost* every instance, it is just too slow. Even the query that Simon mentioned will still crawl if you run it over a large dataset.

The performance of any texql query will be impacted by the
- size of the texpress index files being used
- cpu
- disk io

Take this query:

select ecatalogue.irn, emultimedia.Multimedia from ecatalogue, emultimedia
where
exists (MulMultiMediaRef_tab where MulMultiMediaRef=emultimedia.irn )
and
(ecatalogue.irn=1256704)

Can't get more specific than that. Even so, this query (with no other users on the system) takes over two minutes to complete.

Our catalogue has a little over 1.5 million records; emultimedia has 325,000 records. But we have a high-end server: new, fast disks, lots of fast processors, 32 GB of RAM.


Now, if you have significantly smaller datasets than we do, you may find that a query like this, or the example Simon gave, will perform reasonably well. But this is a very simple query - more complex queries will take longer, and may time out (if issued using texxmlserver).

So Alan, I know it seems ugly, but really the only practical way to do this is to run queries against one table and pass the results to a second query. If you do this, you will find that it works really very well indeed.

Tellingly, this is the way that KE's own php objects work. And the EMu client too for that matter.
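To make the two-pass pattern concrete, here is a toy sketch in Python. run_query and the in-memory "tables" are stand-ins invented for illustration - not any KE API - and the point is only the shape of the approach: each pass queries a single table, and the join happens on the client side.

```python
def run_query(table, predicate):
    """Stand-in for issuing a single-table query; returns matching rows."""
    return [row for row in table if predicate(row)]

# Tiny fake datasets standing in for the real EMu tables.
emultimedia = [
    {'irn': 1, 'repository': 'ContentDM'},
    {'irn': 2, 'repository': 'Other'},
]
ecatalogue = [
    {'irn': 10, 'MulMultiMediaRef_tab': [1]},
    {'irn': 11, 'MulMultiMediaRef_tab': [2]},
]

# Pass 1: query emultimedia alone and collect the matching irns.
media_irns = {r['irn'] for r in run_query(
    emultimedia, lambda r: r['repository'] == 'ContentDM')}

# Pass 2: query ecatalogue using the result set from pass 1 --
# no server-side table join is ever issued.
matches = run_query(
    ecatalogue,
    lambda r: any(i in media_irns for i in r['MulMultiMediaRef_tab']))

print([r['irn'] for r in matches])  # [10]
```

With a real system the second pass would pass the collected irns into a fresh Texql query rather than a Python lambda, but the structure is the same.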

I do a lot of texql, texapi etc coding. My approach has been to develop wrappers around this stuff to work around some of these sorts of coding nasties, deal with strong typing etc.

One or two other points:

> If there are resources that details the operation of the texxml server

> Http interface doesn't accept joins?

This is nothing to do with texxmlserver. The performance bottleneck is with the texpress query engine. You will find little difference in query response time whether you use texxmlserver, texapi or command line texql. It's true that texxmlserver defaults to timing out after 30 seconds (which, by the way, you can change) but really the problem is not the timeout - it's the query response time.


Simon > (has it ever been updated since the 90s?)

The documentation was last updated 14 years ago. Back then, documentation was not exactly user friendly and anyway Texql has progressed a long way since that time. So, continue doing exactly what you're doing - experiment and ask!

Regards

Forbes Hawkins
Museum Victoria

03-Jul-08 09:00:00
Category: EMu Administration
Forum: Texpress

Hi Alan

It exists (I'll let KE point you in the right direction), but there are couple of things to know.

1) it was written for vb5. It works on vb6. .NET is another matter. KE's vb api includes a wrapper library which handles memory management related stuff, but it does not work in vb.net. When I started working with .NET, I had to ditch the original vb wrapper component and write my own.

2) the texapi documentation is as old as the texql documentation, and perhaps even more out of date

3) KE don't as a rule compile the api libraries for win32. So what is available currently is a couple of versions old. They'll oblige you with an update if you ask for one - I haven't bothered lately cause the most recent version works fine for my needs so far

4) Most KE support staff don't work with TexAPI directly; their experience lies in other areas. So immediate assistance is not always so easy to get

5) If this is related to your questions about texql queries, TexAPI is not going to make things the slightest bit faster or easier for you. It will make things slower, take you longer to code, will take a while to pick up (given there is hardly any documentation) and may also introduce risks to your data unless you know what you are doing.

If this is simply about getting data *out* of EMu, then I would stick with texxmlserver for the following reasons:

- it is robust
- it is well supported by KE
- it is easy to use
- it returns pure XML, and is therefore easy for developers that are unfamiliar with texpress to work with
- there is plenty of example code to look at (check out KE's PHP objects)
- you have all the available options to work with the data that is returned; you can code in a variety of different languages / platforms.

I am certain that KE will back all this up if you ask them!


Regards

Forbes Hawkins
Museum Victoria

Joanna and fellow taxonomy users,

Our apologies for the slow response. Your email arrived during our Easter holiday period and several of us have been away for a few days.

In regard to your proposal for a new Taxonomy module, firstly we'd like to sincerely thank you for the effort that you and many others have put into this. As you are all aware, we are very supportive of any efforts to improve our taxonomic support and in particular efforts that result in a more standardized system and we hope that this proposal takes us a significant step towards that end.

In your email, you've identified three areas of concern - buy-in, development costs and rollout costs. These are somewhat intertwined but nonetheless I'd like to address each separately.

DEVELOPMENT COSTS:
This is a substantial development project. It is incompatible with the existing Taxonomy module and so cannot be implemented as a simple upgrade. Rather, we must create a new module. Besides, given that at least one very large museum has indicated it will not be moving from the existing module any time soon, we will be forced to maintain the existing module into the future. We anticipate implementing a new module and then allowing clients to choose which version - the old or the new - should be included in their EMu implementation.

We haven't costed the development fully but it is certainly in the tens of thousands of dollars. This in itself is not a problem. We have our own budget set aside for ongoing improvement of EMu and we would be happy to use some of that to fund a revision of the Taxonomy module.

Naturally our budget for ongoing EMu improvement is limited. The only criterion we apply to use of that budget is that the development be for the good of as broad a user base as possible.

ROLLOUT COSTS
Rollout of the new Taxonomy module to an existing site could be achieved as part of a normal upgrade. However, this proposal takes the Taxonomy module in a direction which is incompatible with the existing module. So while it is easy to put the new module onto a client's machine, their existing taxonomic data is held in the old module in an incompatible format. So at every site, there is a need for the existing data to be migrated to the new module. This migration will include some simple mapping of old fields to new fields but it will also involve splitting data, creating multiple records from single records, establishing links between records, populating attributes by inference from existing fields. In other words, it is not simply adding new fields to an existing model; it changes the structure of the existing data.

There will also be impacts flowing through to the catalogue. For example, most natural history catalogues draw some attributes from Taxonomy to aid in more efficient searching. These catalogues will have to be modified to draw from the new module and all catalogue records updated.

Additionally there are likely to be changes required to reports and to existing web interfaces.

We would anticipate that the core of the migration (i.e. the basic fields) would be relatively simple and would be common across all sites. However we expect that at many sites, if not all, there will be many local migration hurdles which have to be overcome plus changes to the catalogue, reports and the web. Thus there will almost certainly be some site-specific effort required for each museum. Each museum will also be required to extensively test the migration before going live to ensure that it has accurately mapped the data.

In addition to this, many clients - eleven in all - currently use sub-classed versions of taxonomy. As their existing Taxonomy modules are different from the standard module, the migration process to the new module will have to be customized individually for them. Additionally, those clients must decide if they want to have their existing customizations re-applied to the new Taxonomy module.

Unfortunately all of this involves time and effort - from both KE and each museum. Again, we haven't accurately costed this - in fact, it must be costed on an individual site basis - but if I had to guess, I would expect it to be around $5,000 at "simple" sites and perhaps $20,000 to $30,000 at the more complex sites (e.g. with lots of environments, plus a customized taxonomy module to begin with).

These are real costs and local to individual sites and so unfortunately we would have to pass these on to our clients. We would of course do our best to keep these rollout costs to a minimum. But the fact that there will be some rollout costs may have an impact on whether a museum adopts the new module or not.

BUY-IN
Which brings me to the issue of, as Joanna put it, "buy-in".

As I said above, it is very much in KE's interest that the Taxonomy module be improved and that we achieve more standardization of Taxonomy across the entire client base. It is equally important that we invest our development efforts on areas of benefit to as many users as possible (even though this proposal is of interest only to natural history museums who make up only a small proportion of the total EMu client base, they tend to be the largest clients).

The discussion on the forum has been excellent with some very good contributions and healthy debate. Clearly there are some clients very interested in the proposal. But comment has come from a small proportion of Taxonomy users and so it is difficult for us to assess how widely it is supported and indeed whether this proposal achieves the goals of wide applicability and increasing standardization.

It certainly appears that FMNH, AMNH, YPMNH, NYBG and MV fully support this proposal. We have already heard that the two largest museums - NMNH and NHM - have no plans to move to the new model.

But our records show that some 30 EMu clients have the Taxonomy module (we can't tell how extensively they are using it) and we would really like to hear from as many of those as possible. Does this proposal represent an improvement to our taxonomy support and do you intend to adopt it bearing in mind that there will be some rollout costs?

We look forward to hearing from all natural history museums.

Kind regards,
John Doolan

Hi Vincent

Short answer is yes; at Museum Victoria (MV) we are doing this, with TIFFs and DNG, which average around 120MB in size. (We're very comfortable with DNG; I recommend you research the pros/cons and form your own opinion on this.)

Okay, now here's the fine print...

I am not sure if you need to be able to view these images in your EMu client. If so, then there are a few issues that you need to be aware of, for which (fortunately) there are workarounds.

a) Thumbnail generation is performed by the EMu client. At this stage, generation of thumbnails derived from certain formats, including RAW, is unsupported

b) The image viewer built into the EMu client is unable to display some formats, inc. RAW

c) Not sure if you want to load hi-res tiffs. Tiffs are supported, but there can be significant performance issues when these images (or indeed any large files) are viewed using the EMu client, for two reasons:

1) The image has to be copied down to the client side from the server - the larger the file, the longer this takes

2) The image has to be opened by the image viewer built into the EMu client - again, the larger the file, the longer this takes

Local caching of images can mitigate the 1st problem to some degree, but how effective this is depends on how many of these images you are looking at, how large they are, how large your cache is set to (which you can configure in your EMu client), and how fast your local disk is etc etc.

Even if local caching helps somewhat, the 2nd problem remains: a 50mb tiff takes more time to open in the image viewer built into the EMu client than does a 1000 kb jpeg.

These performance issues are very noticeable and (in our experience) frustrating for users if they are paging through sets of MMR records. When one of these records is viewed, the EMu client tends to hang while it loads the image. This takes several seconds - or much longer for seriously large files.

Fortunately, it is possible to work around this.

-------------------
The TIFF Workaround
-------------------
At MV, we deal with large tiffs in the MMR as follows.

When we insert a tiff into the MMR, a jpeg derivative is created automatically. It becomes one of the "resolutions" available to the record. So a jpeg "resolution" is available for every tiff.

We have written a script which runs on our server every night. Any tiff records in the MMR are modified, so that the jpeg "resolution" becomes the "primary" image, and the tiff "primary" image becomes a "resolution".

When a user next views the record the EMu client only has to deal with the jpeg version, which is much faster. The tiff remains available if you want to view it.

----------------
Dealing with RAW
----------------
We are only just starting to store DNG files in the MMR. So far, we haven't needed to worry about viewing these in the EMu client. If we did, then this is how we would deal with it:

You need a script which finds records for RAW files. For each record, it:

- creates a jpeg derivative (using something like ImageMagick)
- creates a thumbnail (ImageMagick again)
- makes the jpeg the "primary" image, and the RAW file a "resolution"

So now you can see the jpeg version of your RAW image from the EMu client (and from the web if you have this hooked up). The RAW image is still available as a "resolution".

This process works for any image format not natively supported by EMu (JPEG 2000 for example) - as long as the image conversion tool you are using supports that format (ImageMagick supports a great many formats).
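As a rough sketch of the derivative step (Python purely for illustration; the convert flags, quality, thumbnail size and paths are examples, not anything EMu-specific), each command list below could be handed to subprocess.run:

```python
from pathlib import Path

def derivative_commands(raw_path):
    """Build ImageMagick 'convert' commands producing a JPEG derivative and
    a thumbnail for a RAW/DNG file. Output names and sizes are examples."""
    src = Path(raw_path)
    jpeg = src.with_suffix('.jpg')                     # e.g. specimen.jpg
    thumb = src.with_name(src.stem + '_thumb.jpg')     # e.g. specimen_thumb.jpg
    return [
        ['convert', str(src), '-quality', '90', str(jpeg)],
        ['convert', str(src), '-thumbnail', '90x90', str(thumb)],
    ]

for cmd in derivative_commands('images/specimen.dng'):
    print(' '.join(cmd))
```

A nightly script would run these for each new RAW record, then swap the "primary"/"resolution" roles as described for the TIFF workaround.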

-------------------------
Storage management issues
-------------------------
One other thing to consider: if you're intending to deal with large volumes of large files, then the default way of storing them on the EMu server may not be appropriate. EMu is able to resolve file paths from a number of different potential storage locations. Storing the larger files elsewhere can make it much easier to manage the files (backups etc.) and may bring performance benefits.

At MV, we have a script running overnight which moves all files larger than 1MB to our SAN. (It is possible to get this to happen on the fly, but it is easier to implement as a batch process.)
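The MV script itself isn't published, but the file-selection side of such an overnight job might look something like this (Python, illustrative only; it ignores the EMu side of updating resolved file paths, and the 1 MB threshold is just the figure mentioned above):

```python
import shutil
from pathlib import Path

def move_large_files(source_dir, dest_dir, threshold=1_000_000):
    """Move every file larger than `threshold` bytes from source_dir to
    dest_dir, preserving the relative directory structure. Returns the
    list of destination paths. Sketch only: a real job would also update
    EMu's storage-location configuration."""
    moved = []
    for path in Path(source_dir).rglob('*'):
        if path.is_file() and path.stat().st_size > threshold:
            target = Path(dest_dir) / path.relative_to(source_dir)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))
            moved.append(target)
    return moved
```

Run nightly (e.g. from cron), this keeps only the small files in the default location while the large masters live on the SAN.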

------------------------
DIGITAL ASSET MANAGEMENT
------------------------
The following is not directly relevant to your question, but may be of interest.

At Museum Victoria, we are in the process of implementing a third party digital asset management system (DAMS). The DAMS will be integrated with EMu through the use of a web service; we developed the functional brief and asked KE to build it for us.

Integration of the DAMS and EMu is important; many of the images that (for various reasons) need to be available in the DAMS also have a function in EMu, as reference images attached to other EMu module records and/or images to be served to the web via pages built using KE's php classes.

The MMR will serve master/co-master images to the DAMS. DAMS users will be able to ingest these files directly into the MMR via the DAMS interface. Most of the time (but not always), these will be hi-res DNG/TIFF files.

Many of the records in the MMR do not need to be available to the DAMS, and many images in the DAMS have no relevance to EMu. Therefore, we have built into the system the ability for records in the MMR to be flagged as "External", "Internal" or "Shared".

* Internal records are available as normal to EMu users; they are not exposed to the DAMS.
* External records are stored in the MMR, but are invisible to the EMu client; they are only available to the DAMS.
* Shared records are available to the DAMS, and visible in EMu; updates to "shared" records/resources may only be made in the DAMS.


Let me know if any of the above requires further clarification.


Regards


Forbes Hawkins
Collection Systems Senior Developer
Museum Victoria

05-Jan-08 11:00:00
Category: Using EMu

Happy New Year!

Under the Condition tab, Date Checked is automatically populated with the current date; however, with the new year, the day is ahead by one. Is anybody else having this problem? Any suggestions on how to correct this?

Thanks,

Mike Zaidman
Michael Zaidman | Senior Archival Administrator
The Jim Moran Foundation
100 Jim Moran Blvd-JMFDF010
Deerfield Beach, FL 33442
PH: 954.429.2175 | FX: 954.596.7498
E: michael.zaidman@jimmoranfoundation.org

22-Nov-07 11:00:00
Category: Using EMu

Hi Linda

The WISE system that Lee-Anne Raymond referred to is in fact MVWISE (Museum Victoria Wireless Input System for EMu), with which I think you're familiar (http://mvwise.museum.vic.gov.au).

There is no particular connection between the Tissue tabs in our catalogue and MVWISE. I believe it was mentioned because there has been talk here of using MVWISE for data entry in that area, but it probably doesn’t relate to your enquiry.

The concern about using scanners in labs here is referring to issues raised by our Metropolitan Fire Brigade, regarding the use of electrical and radio emitting equipment in close proximity to flammable liquids. We’re working to develop some safety measures to mitigate this concern.

Cheers,

Forbes

Hello,
We are in the beginning stages of implementing EMu here at the Natural History Museum of Los Angeles. We want to begin to create a limit to the terminology that will be used in the look-up lists to make searching easier in the future. We were thinking of creating a committee to set the terms even before much of the data is migrated (which would mean the different departments would have to do some further data cleaning beforehand). What have other institutions done to deal with look-up lists? Can anyone provide me with an example of the scopes of some of their look-up lists for a heavily populated fields like "Classification" or "Material"?
Thank you,
KT Olson
Conservation Technician
Natural History Museum of Los Angeles

14-Nov-07 11:00:00
Category: Using EMu

Hi Alva

Sorry for the delayed reply. (I thought I had already replied to this post.)

When you say you want to create a new PowerPoint report, you are not re-using a report that is already available in the Client; it is like creating a new Crystal Report from scratch. You will need to:

1. Create a new Reports entry by selecting the fields.
2. Create a new PowerPoint file (.ppt or .pps) as you would a new Crystal report (.rpt). As PowerPoint is not a normal reporting tool like Crystal Reports, you will need to use VBA scripts (macros). This is similar to the Word and Excel reports, and is why you get a blank PowerPoint file when you click on "Yes".
3. After you have created the PowerPoint file, click Save and save the .pps or .ppt file into EMu.

In step 2, the VBA script will do two things:
1. Connect to ODBC.
2. Arrange the data on the slides and notes pages.

For an example of how to connect to the ODBC, you can look at the Word and Excel examples in EMu Help. For example: Contents page -> Working with EMu Records -> Reporting -> Microsoft Excel -> Creating an Excel report using Visual Basic -> Step 2: Create the report in MS Excel -> 1. Write the VB code.

The other place where you can find an example is the existing PowerPoint reports on your Catalogue and Multimedia modules.

Hope this helps
Yanwei

18-Oct-07 09:00:00
Category: Using EMu

Hi Alva

Glad that I can help. Good to know that you find the PowerPoint report useful.

Regards
Yanwei

16-Oct-07 09:00:00
Category: Using EMu

Hi Alva

May I ask:

1. Which version of PowerPoint do you have?

2. What is the macro security level on your PowerPoint? You can check by going to the Menu: Tools -> Macro -> Security. If your Security level is set to High, please set it to Medium.

3. Did you click on the "Run" button to run the report? Unlike a Crystal Report, a PowerPoint report does not automatically load the data. To load the data, you will need to activate the VBA script in the report by clicking on the "Run" button. If you have security set to Medium, a pop-up message will ask whether you want to allow the macro to run; click the "Yes" or "Enable Macros" button.

Hope this helps
Yanwei

21-Aug-07 09:00:00
Category: Using EMu

Hi Perian,

I understand your difficulty in trying to fit MARC into the structure of any Collections Management System... many have tried! MARC is designed for a different purpose and was developed many years ago. There is a school of thought that promotes a compromise between MARC systems and collections systems, but everyone tends to reach the point where they are unwilling to compromise their collection data structures sufficiently for MARC. Of course, if you significantly compromise MARC, it is not MARC anymore. The Library of Congress has now moved to EAD, which is more contemporary and blends well with modern collections data structures.

KE has considered accommodating MARC but the compromises have been prohibitive (and costly). Unfortunately, MARC will need to be squeezed into a collections data structure or kept separate. One solution has been to use web searching across two systems to allow common access to collections and MARC data. You still live with some compromise in the search criteria but you can maintain the integrity of both data sets.

Best,
Alan
Alan Brooks
KE Software Inc

t. +1 604 87 1960 Ext. 112
f. +1 604 877 1961

Does anyone have any information about the Used option in the Admin tab of the Lookup List module? Particularly in relation to "Location Hierarchy".

We are currently cleaning up the data used by the Locations module and have some 'dirty' locations which need to be kept because they are used by the Movement History in the Internal Movements module. However, we don't want those locations to be visible when relocating an object OR when creating a new location. A solution seemed pretty obvious: find the Location Hierarchy records for the 'dirty' locations and set Persistent=No, Hidden=Yes and Used=No. I did a quick Import/Export and the data is as it should be, except it is not functioning as expected. Some of the 'dirty' locations are no longer visible when relocating an object and some are. There appears to be no logical reason why some are visible and some are not. I have tried logging in and out of the system but this has no effect.
Any ideas?

Liz

17-May-07 09:00:00
Category: EMu Administration
Forum: Texpress

Hi

Ahhh, false matches. The bane of every die-hard Texpress & Titan user for decades. It's like an old mangy dog; it really annoys you and is always hanging about - but is also kind of friendly and familiar and strangely comforting to have around.

> However, I am fascinated to note that your
> application correctly reports recordset size!
> For example, I searched for plants from
> Afghanistan...

Mike - the 'false matches' quirk won't happen on every query. It depends on the nature of the query, and the number of records returned.

False matches are a side-effect of the indexing methodology used by Texpress. The effect can be seen on any of the interfaces used to access the database - TexAPI (used by the EMu client), TexXMLServer, Texforms, and it will also have an effect on the JDBC driver.

The problem can be mitigated by tuning the configuration of the database index files. Reconfiguration can also improve query response times.

If you're noticing lots of false matches, you should look at fine tuning the database configuration because it will be affecting users accessing data from their EMu clients. You'll never get rid of false matches entirely, but the occasional configuration optimization will mean that you rarely notice it.

Configuration optimization is a rather technical process; it's something of an art, and not for the faint-hearted. Generally it is best left to KE, unless you are really familiar with Texpress and the EMu server environment.

KE have included details of the process in the EMu help file for those that are more technically capable - have a look there if you're feeling game.

It is really important to try it within a test environment first before applying it to your main dataset. Note also that you need to bring the database down to rebuild the index file, which can take hours for larger datasets.

Re the SQL export: I have posted elsewhere about using TexAPI to export data from Texpress to MySQL/SQL Server. Since then, however, KE have introduced TexXML server, so an approach like the one Seb is using is certainly the best option nowadays. Personally I'm impressed that he has actually managed to document it.

Search google for 'process xml php' and you'll find there's a lot of info there. I have also used XSLT for small database exports.
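As an illustration of the kind of processing involved (the element names here are invented purely for the example; a real TexXML export will differ), Python's standard library can walk an XML export in a few lines:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of a TexXML-style export; real element
# and attribute names depend on your export schema.
SAMPLE = """
<records>
  <record irn="101"><SummaryData>Specimen A</SummaryData></record>
  <record irn="102"><SummaryData>Specimen B</SummaryData></record>
</records>
"""

def summaries(xml_text):
    """Return a list of (irn, summary) pairs from the export."""
    root = ET.fromstring(xml_text)
    return [(rec.get("irn"), rec.findtext("SummaryData"))
            for rec in root.findall("record")]

print(summaries(SAMPLE))
```

The same traversal is equally straightforward in PHP's SimpleXML or via an XSLT stylesheet, as mentioned above.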

Whether you need to export to an alternative database platform in order to do web development is perhaps debatable. Relational database platforms are more familiar to many developers, but EMu's "object oriented" approach can have its advantages too - especially for more complex Natural History datasets.


Regards

Forbes
Museum Victoria

28-Apr-07 09:00:00
Category: EMu Administration
Forum: EMu Admin

The error message "TexAPI Error - Cannot connect to remote host (number 307) at offset 0" generally means that EMu is not able to establish a network connection with the server. If you encounter this message when first logging in, I would check to make sure that you are connecting to the correct host and port and that the EMu server is up and listening on that port. If that all checks out, I would check the network link between the two machines to make sure nothing is blocking the connection (e.g. a firewall).

If you encounter this message when EMu is running, I would check your network connections to make sure that there is nothing that could be dropping the connection. It may be that your computer is going into hibernation and terminating the connection, or it may be that some network hardware between the client and server is set up to terminate connections after a set period of inactivity.
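As a first diagnostic step (an illustrative sketch, not a KE-supplied tool), a plain TCP connection test from the client machine can tell you whether anything is listening on the EMu server's port at all:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds.

    A False result means the host is unreachable, nothing is listening
    on that port, or something (e.g. a firewall) is blocking the way.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host/port -- substitute your own EMu server details:
# can_connect("emu.example.org", 20000)
```

This only tests basic reachability; an idle-connection timeout imposed by intermediate network hardware (the second case above) will not show up in a quick test like this.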

Matt McLaughlin
KE Software (Vancouver)

28-Apr-07 09:00:00
Category: EMu Administration
Forum: EMu Admin

I have been getting this error (TexAPI error) for quite some time and haven't been able to figure out how to make it stop. Any insight or ideas would be greatly appreciated. It seems to occur when I leave EMu untouched for more than a couple of hours; when I return to using it, the error message appears.

Thanks,

Michael Zaidman | Senior Archival Administrator
JM Family Enterprises, Inc.
100 Jim Moran Blvd-JMFDF010 | Deerfield Beach, FL 33442
PH: 954.429.2175 | FX: 954.596.7498
E: michael.zaidman@jmfamily.com
Web: www.jimmoranfoundation.org

30-Mar-07 09:00:00
Category: Using EMu

Hi Perian,

You could use a regular expression to specify an empty field. In the Substitution window of your global replace, in the 'Text to find:' field, type ^$, which essentially means the text must start and end with nothing, in other words an empty field. In the 'Replace with:' field, enter the text you want to put in the field. At the bottom of the Substitution window, under the Options group box, check 'Regular expression'.
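If you want to convince yourself what ^$ matches before running the global replace, the same pattern can be tried in any regex engine; here is a quick Python illustration (Python is just for demonstration, EMu's substitution window has its own regular expression support):

```python
import re

empty = re.compile(r"^$")

print(bool(empty.match("")))        # True  - an empty field matches
print(bool(empty.match("Bronze")))  # False - a field with content does not
```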

Let me know if you have any problems with the above.

Thanks,
Sylvia

Hi Perian,

Unfortunately those pages are no longer being maintained in favour of the on-line help. An index similar to the old web page can be found in the help EMu Administration->The EMu Registry->Registry Settings->Alphabetical list of EMu Registry Entries. I would also suggest using the built-in search feature to help find what you need.

While it may be a bit of an adjustment to use the on-line help, I think you'll find that the examples, diagrams and more detailed explanations are well worth it.

--Mattius

Matt McLaughlin
KE Software (Vancouver)

27-Mar-07 09:00:00
Forum: Catalogue

Hi all;

My opposition to "batch" data collection for EMu is, I think, pretty well known, for reasons which I have explained at the various user group meetings over the past few years. So I'm not going to repeat my arguments here. However, if there is anyone out there who hasn't had the dubious pleasure of listening to me wax lyrical about what I see as the significant dangers of this approach, please let me know :)

Okay, now with that said...

> We want to scan objects and change their locations: we gather a set of barcodes on a scanner out in the storage or collection area

It is technically feasible to develop this sort of functionality of course. But the devil is in the detail; the approach and methodology used will depend very much on exactly what your requirements are. You need to carefully consider how you want to work, and then think about these sorts of issues:

1. How “clever” do you want the system to be – do you want your scanners to be “aware” of existing EMu data, and to what degree?

For example, when you scan a location, do you want the scanner to be able to verify immediately that there is a corresponding location record matching that barcode in EMu? (which would require a local copy of some/all your locations data)

2. Data conflict issues and other problems may need to be dealt with during the "uploading" phase of the process. How smart do you want this to be?

For example, if you have scanned a barcode on an item, but that barcode can’t be found on the system, how do you want that to be resolved during the uploading phase?

If the current times on all of the scanners do not match exactly, then there is the potential for data concurrency issues. How would you deal with that?

3. How would you deal with data loss due to scanner failure, battery failure etc? Perhaps enforce uploading of data when a certain number of records have been captured to minimise the risk? Perhaps ensure you have software & hardware that is capable of writing to RAM that is power-failure resistant?

Really these are questions about the level of risk to your data you’re willing to accept. If the answer is “none” then you’re going to have to be extremely careful when you design the system, and also be prepared to have staff willing to work around the system to some extent (instead of the other way around).

I have had experience with EMu, and with barcode batch systems. I can tell you that to develop a simple system that would allow you to scan a whole stack of barcodes on a windows mobile pda and then upload them into EMu would take a few days at most. But this would be a system with few checks and balances. A robust application which attempts (as far as is possible) to address data integrity concerns is much more complex.

So then – more detail is required to answer the question!
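To make question 2 concrete, the core of the uploading phase is just partitioning the scanner dump into barcodes EMu recognises and barcodes a person must resolve by hand. A minimal sketch (all names hypothetical, and with none of the checks and balances a robust application would need):

```python
def reconcile(scanned, known):
    """Split scanned barcodes into (matched, unmatched) against known EMu barcodes.

    scanned: iterable of barcode strings from the scanner dump
    known:   set of barcode strings already present in EMu
    """
    matched, unmatched = [], []
    for code in scanned:
        (matched if code in known else unmatched).append(code)
    return matched, unmatched

# Two good scans and one unknown barcode that needs manual resolution:
m, u = reconcile(["A001", "A002", "X999"], {"A001", "A002"})
```

A real implementation would then route the unmatched list into whatever resolution workflow you decide on, which is exactly the design decision raised above.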


> We want to print barcodes to apply to objects

Sounds like you have a barcode printer yes? Ideally you would port your .NET code to talk to EMu - which I realise limits your options in terms of seeking development assistance :)

A perhaps less elegant but certainly practical solution would be to develop an admin task that:

1. accepts a range of records to print
2. exports the barcodes from those records
3. updates a flag on the records

... then a local service (or similar) on the client side picks up the exported file, performs whatever reformatting is required and sends it through to the printer.
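The three-step admin task described above could be sketched as follows (the record structure and field names are hypothetical, purely for illustration):

```python
def export_barcodes(records, first_irn, last_irn):
    """Sketch of the admin task: select a range of records, emit their
    barcodes for the client-side print service, and flag them as printed.

    records: list of dicts with hypothetical 'irn', 'barcode', 'printed' keys.
    Returns the barcode lines that would be written to the export file.
    """
    lines = []
    for rec in records:
        if first_irn <= rec["irn"] <= last_irn:   # 1. the requested range
            lines.append(rec["barcode"])          # 2. export the barcode
            rec["printed"] = True                 # 3. update the flag
    return lines

recs = [{"irn": 1, "barcode": "B0001", "printed": False},
        {"irn": 2, "barcode": "B0002", "printed": False},
        {"irn": 3, "barcode": "B0003", "printed": False}]
out = export_barcodes(recs, 1, 2)
```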

Can I just ask – have you actually done a cost analysis of printing your own as opposed to purchasing museum grade barcode labels? We found that it was significantly cheaper to purchase them.

By the way, if anyone reading this is wondering how to print barcodes to a normal printer, you can just download a barcode font. There are many barcode standards; the one called Code39 is probably the most appropriate in most cases, and there are free font sets available for this. You can then use that font in Office applications, Crystal etc, and print barcodes to your heart’s content. Not exactly asset grade of course...
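On that note: most free Code 39 fonts require the data to be wrapped in asterisks (the start/stop characters) and only encode a restricted character set. A tiny helper, assuming such a font, might look like:

```python
# Code 39 encodes digits, upper-case letters, space and the symbols -.$/+%
CODE39_CHARS = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%")

def code39_text(data):
    """Wrap data in the '*' start/stop characters most Code 39 fonts expect."""
    data = data.upper()
    if not set(data) <= CODE39_CHARS:
        raise ValueError("character not encodable in Code 39: %r" % data)
    return "*%s*" % data

print(code39_text("reg-12345"))  # -> *REG-12345*
```

Render the result in a Code 39 font in Word, Crystal or wherever, and you have a scannable barcode.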


In response to Perian’s question:

> But if there's a simple solution to integrating barcoding software and then syncing it with EMu, I'd like to know any details about it.

If you haven’t already, check out http://mvwise.museum.vic.gov.au :)


Forbes Hawkins
Collection Systems Developer
Museum Victoria

At the recent Natural History Special Interest Group meeting in Ottawa, three questions arose with respect to the automatic generation of Scientific Name. We at KE are not sure how to proceed and so I would like to get opinions from people about how each of these cases should be treated.

1. How should the Scientific Name be calculated for taxa whose rank is above Genus? Take for example a Family record. Should the Scientific Name be (a) the Family name, (b) the Family name followed by the author and year, (c) empty or (d) some other combination.

2. How should the Scientific Name be calculated for taxa with a rank of Genus? Should it be simply the Genus name on its own or should it include the author and year applicable to the Genus name?

3. For botanic collections, what is the preferred abbreviation for forma within the Scientific Name?

If you are able to offer an opinion, could you please indicate if you believe that it should apply for ICBN, ICZN or both.

Many thanks,
John Doolan

Hi Brett

Yep, you most certainly aren't alone. We have been developing with and using Texpress (and its prior incarnation, Titan) for over two decades. We have migrated most of our databases into EMu, but several still remain on Texpress and will continue to do so for some time to come.

We generally do all of our EMu admin using Texforms. Most of our EMu data migration is done in-house, and we have done quite a bit of alteration of EMu back end scripts etc. We also do a fair bit of development using texapi - mainly EMu related stuff (tools for loading images, managing EMu reports etc etc). One product we have released commercially (see mvwise.museum.vic.gov.au).

We even have websites that are still cranking out pages using (the now extremely redundant!) TexHTML (eg. see http://www.museum.vic.gov.au/bioinformatics/butter/). I think Victorian Hansard may still be running on TexHTML (KE may do all of their development for them - not sure).

KE have a rather long list of clients on their website (http://www.kesoftware.com/texpress/clie … ion.html). Many of these organizations won't be running EMu/Vitalware - so they must be using Texpress. Not sure how up-to-date this list is but I think it's safe to assume that there are still quite a few other Texpress users (even developers?) out there...

Regards

Forbes
Museum Victoria

Yes, I believe there are a few of us out there running on Texpress sans EMu. At the National Herbarium of Victoria in Melbourne we have a Texpress database of c. 750,000 specimen records, and I know of at least two other herbaria in Australia that run Texpress.

For web delivery we mirror our data in a MySQL database outside our firewall. This works really well for us, although we are moving to PostgreSQL because of its better spatial support.

Cheers,

Peter

14-Sep-06 09:00:00
Category: EMu Administration
Forum: EMu Admin

Hi Forbes

We cheat.

The image transfer from client PC to server is done externally from EMu: Samba creates a share for a directory on the EMu box that the user's PC can connect to. The admin task uses image files placed in this directory (via Samba, not by EMu) - it doesn't pull the images across itself.

As for getting EMu to pull stuff directly from PC to server - I agree it's a great idea and would enable some pretty cool features. How to do it would be the killer. Using Samba is a quick and cheap solution in the interim.

Hope the above makes sense

cheers

Jon
--
Jonathan Kelly
KE Software Pty Ltd

Hi Kara

We have developed our own tool, which sounds similar to the one Jon has described. In addition to being able to enter data common to all images being uploaded, you can also source additional data from a spreadsheet so you can also upload data unique to each image. If an object registration number is included in the spreadsheet, it will use this to link the new image record with the appropriate Catalogue record.

Forbes Hawkins
Museum Victoria

Hi Kara

Not sure if this is exactly what you are after, but you may be interested to know that KE have developed systems for Te Papa in New Zealand and the Powerhouse Museum in Sydney that allow users to upload multiple images to EMu from the EMu client.

AFAIK Te Papa are using their system (I think Grant Smith from KE Melbourne is the person who would be able to give you more on this)

The PHM system is not yet in operation (it is to be delivered as part of an upgrade and data load project being done on new hardware they are setting up). PHM's is designed so users can simply copy files to a Windows folder (eg by downloading from a digital camera) and then run an Admin task from the EMu client that brings up a simple dialog box where the user enters common field values that apply to all the images (creator, keywords etc). They then click the OK button on the dialog and the images are imported directly into EMu's Multimedia module as records (with all the common fields set).

I think the Te Papa one has extra bells and whistles that Grant could explain better than me.

Hope this is of interest

Jon
--
Jonathan Kelly
KE Software Pty Ltd
http://www.kesoftware.com
Tel: +61-2-9299-3077
Fax: +61-2-9299-8167

14-Sep-06 09:00:00
Category: EMu Administration
Forum: EMu Admin

Hi

I was under the impression that the Admin Tasks mechanism did not support uploading files from the client to the server. However, Jon Kelly's response in this forum thread [http://www.emuusers.org/Forums/tabid/57/forumid/22/view/topic/postid/827/tpage/1/Default.aspx] implies that it is in fact supported(?)

If it is not supported, I'd be interested in knowing if other organisations agree that this would be a useful feature.

Cheers

Forbes Hawkins
Museum Victoria

01-Aug-06 09:00:00
Forum: Multimedia

Hello,

We are looking to delve into the world of Jpeg2000 and are wondering what the general opinion is concerning it and its application to KE Emu. I’ve read the “cookbook” by Larry Gall which was most helpful in explaining away some of my confusion.

I have a few questions however that I hope can be answered:
1) If we proceed with Jpeg2000 Part 1 now, would it be possible to interface it with a website linked from KE Emu later on down the road?

2) What kind of effect will it (Jp2) have on the server memory?

3) How will people without the ability to view Jpeg2000 images on their computers see the images from our website (once we go through KE Emu)? Will they see the thumbnail view only?

4) Are there any museums/institutions currently using Jpeg2000 Part 1?

5) For those already using it, what limitations, drawbacks, concerns have you encountered?

6) When will KE Emu support Jpeg2000 and would it be better to hold off until that time?

7) Is it possible to take a group of Jpeg images and do a batch conversion to Jpeg2000?

8) What’s the recommended size, and the limit, per image when decompressing the image from the thumbnail? Also, what’s the recommended size for the Jp2 thumbnail?

Many thanks,

Kerry Barrow
House of Commons, Ottawa, Canada.

Dear JP, Ducky and others who worked on this proposal,

Thanks for your efforts and congratulations on the final document. It is concise and well presented.

I had hoped that there might be a few more responses from users and so have been holding back my questions. But we would like to keep this moving and so I'll post my questions anyway.

My questions/observations are:

1. On the Information tab, you have shown a selection of 9 fields from the catalog. As you are aware, each catalog is potentially different. There is no guarantee of a catalog having any particular fields except for a very limited number (IRN, SummaryData, ExtendedData). This means that we can't design the module with these fields in it, even as defaults. The only option would be to design an empty panel (i.e. put the Group Box there). Then the module would have to be subclassed for every installation. No-one could use the base version. I think this is problematic. Alternatively, we could show only certain generic fields (SummaryData, ExtendedData). Sites would have to subclass the module in order to see any other fields.

2. I note that Assoc. Event, Assoc. Loan and Other Reason appear on both the Request and Information tabs. Are these the same backend fields or are they different? If the former, why do they need to be repeated? If the latter, what is the purpose of each set?

3. On the Description tab, I would guess that some institutions would consider that these data should be stored in the catalog. Indeed many catalogs have fields already for materials, measurements and description:

3.1. I note that you say the Materials can push back to the catalog, while Dimensions need to be pushed back to the catalog. Is there a distinction?

3.2. How are you anticipating that values would be pushed back to the catalog (notwithstanding the different catalog designs)?

3.3. Are there any security/permissions issues with conservators updating catalog fields?

3.4. When should they be pushed back? Conservation records will be kept in perpetuity. What should happen if a conservator were to update an old record? Should this push back the values in the Description tab to the catalog again?

3.5. Many catalogs have variations of the measurements grid. For example, some support multiple units, with automatic conversions between units. How would this be addressed on this tab? For example, if this tab did not have all of the dimensions columns that were recorded in the catalog, or it had different columns, then the values pushed back to the catalog would be incomplete or even incompatible.

3.6. Do you expect that these values would be copied from the catalog when a new conservation record is created?

4. How does the Condition tab relate to the Condition tab which appears in most catalogs? Do you expect the Condition tab in the catalog still to be used? If not, does this mean you do not want any audit trail of Condition statements?

5. Also in regard to the Condition tab, is this the Condition before treatment or after?

6. On the Int. Proposal tab, I note you can have more than one Authorization. Why is this necessary? Also, each Authorization entry can have an Approved? value of Yes or No. What does it mean to have both a Yes and a No in the same list? I note elsewhere that you want to implement some tab-switching based on the Approved flag but this is impractical if the flag is actually a list and not an atomic value.

7. On the Ext. Proposal tab:

7.1. You appear to have a list for Proposed By: but are only displaying one value. This is certainly possible but can be confusing to users. Was this purely a screen real estate issue?

7.2. For external objects, is it safe to assume that they would be documented in the catalog? If not, then what happens to the linking information and the "pushing back" of data?

7.3. This tab also has a list of Approved flags. How do these interact with those on the Int. Proposal tab?

8. The Treatment tab appears to support recording of the details of one treatment only. Is this adequate?

9. There doesn't appear to be anywhere to record the results of a treatment. Are you intending that this would be included somewhere in the Treatment text?

10. On the Analyses tab, I note that you allow an attachment to Multimedia using the standard record attachment feature. While this is technically feasible, I believe it is very non-standard and hence not particularly intuitive.

11. The Recommendations tab includes simplified versions of several fields that are included in most catalogs. How do you see these interacting with those catalogs? Should this data be pushed back to the catalog? If so, then this suffers from the same problems raised above.

12. On the Non-Digital Media tab, you suggest that the Locator field might be controlled by a regular expression. This is possible but of course will be very institution-dependent and so could lead to more sub-classing of the module.

13. In regard to Special functionalities, when generating duplicate records, what should be done with the object-specific attributes such as measurements, description, condition, handling instructions, etc.? Also do you have a concept for how you see the user interface to this working?

14. Do you believe that this model is adequate for institutions that contract out their conservation activities (as opposed to providing a contract conservation service to others)?

I still have concerns about the 1-to-1 relationship to the catalog. As you are aware, the existing module has a 1-to-many relationship with the catalog, which means that your new design is incompatible with data already stored in the old design. Should those with existing conservation records move to this 1-to-1 model then there will be additional data migration costs for them, which will of course make the decision more difficult.

Again thanks for your input and I look forward to your response.

John Doolan

Hi April,

We're using Narratives as you are but also for other contextual information, including non-Museum generated. We have a range of Narratives created by staff which adds context to the Catalogue records and others which have been created by Museum Studies students giving their interpretation of aspects of the collections.

We are also using Narrative development in a series of outreach projects enabling individuals and community groups to add their own interpretation. These have produced text, images and streaming video. All of these are delivered over the web (http://emu.man.ac.uk/mmcustom/narratives/index.php) - there are development issues for the website which are being addressed!

Best wishes,

Malcolm

Hi April

A number of institutions have used Narratives as a way to structure and develop web content. There are a number of presentations on the Conferences pages which discuss this.

Forbes
Museum Victoria

Hi JP,

This is my first post to this Forum and I wanted to say how much I look forward to further discussions. It has been a bit difficult to follow what appears to be a year-long discussion, given that most of the posts here are dated from October 2005; however, my colleagues here at USHMM have brought me up to date about their involvement in the process of drafting a set of requirements for a core Conservation module.

Thanks for pulling this requirements document together. We’ve had a chance to review it and our comments/recommendations are included in the attached Word document. We look forward to hearing other comments as well.

We need to know a couple of things about where we go from here.

1. How long will the comment period be open?
2. What is the process for finalizing the document?
3. Will we use the Forum exclusively for discussion, or is there an opportunity for a group call, or other ‘get together’?
4. Can you update us on the timetable for the module development?

Thanks,

Angela

Attachment: 1519212193771.doc

Hi everyone

Here's the PDF

- Forbes

Attachment: 151256382771.pdf

Hi Alex/Joanna,

Joanna and I are trying to get the up2date package manager for Red Hat Linux running properly on her system. This should solve the problems Joanna is experiencing, as up2date will install the package itself and any dependencies that go with it.

Brad

20-Apr-06 09:00:00
Category: Using EMu

Hi all

The Page View documentation is now available from the FAQ page (under the reports section).

Forbes
