Hi,
I need to extract login successful / unsuccessful information from the WebSEAL log. I tried importing it, but I get the following message:
Line 14, position 2: There are multiple root items elements.
I think this error occurs because the entire log file (audit trail) is not a single well-formed XML document.
Each audit event in the log file is written as a standalone XML data block (<event ...> ... </event>), and each block individually conforms to XML syntax.
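That is exactly why the import fails: a well-formed XML document must have exactly one root element, so a file that concatenates many <event> blocks will not parse as a whole, even though each block parses fine on its own. A minimal sketch (Python standard library; the element and attribute names here are made up, not the real WebSEAL schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical two-event log in the WebSEAL style: two standalone
# <event> blocks concatenated in one file (names are illustrative only).
log = '<event id="1">success</event>\n<event id="2">failure</event>'

# Parsing the whole file fails: a document may have only one root element.
whole_file_parses = True
try:
    ET.fromstring(log)
except ET.ParseError:
    whole_file_parses = False

# Parsing each block on its own succeeds.
events = [ET.fromstring(line) for line in log.splitlines()]
```

Here `whole_file_parses` ends up False while both single-event parses succeed, which matches the "multiple root elements" message you are seeing.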
Could you please let me know how we can extract data from the WebSEAL log in this case?
Thanks in advance.
Kind Regards,
Philip
extracting data from web-seal log
philippatlur (Participant, Posts: 1, Joined: Mon May 23, 2011 7:43 pm)
I am not familiar with this log ---- but if, as you noted, it is a "collection" of little xml documents (each one on its own line with a new "root" element), then there are two options to consider... [this assumes that the xml documents all have the same general structure, albeit with different actual values/content]
1) Find the xml schema definition from the vendor for this log xml. Ask them for it, or see if it is in their documentation or on their distribution media. It might be there, or it might not exist at all. If they have it, you may be able to import that... or use it in a tool like XMLSpy, Oxygen XML, or other tools on the web to "create" a complete xml document sample... then import that.
2) Find what you think is a "complete" sample among the log records. Vendors write xml in various ways... they might include unused elements as <element/>, or they might leave them out altogether. Your goal is to have a version of that xml "record" that is as complete as possible, and then import that.
Once you have the metadata, you need to look at ways to read this file... in your case, you need to treat each individual "record" as a single xml document. It's likely that each xml document is terminated by an end-of-row indicator or some other delimiter. Your first goal is to be able to read the sequential log file, one full row at a time, and be certain that you are getting the "whole" xml document into a single large column.
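Outside of DataStage, that row-buffering idea can be sketched like this. It assumes each record ends with a </event> close tag on its own line; the <outcome> element and its text are hypothetical placeholders, not the real WebSEAL audit schema:

```python
import xml.etree.ElementTree as ET

def parse_events(lines):
    """Buffer lines until the record terminator (</event>) is seen,
    then parse the buffered chunk as one standalone XML document."""
    buf = []
    for line in lines:
        buf.append(line)
        if "</event>" in line:          # assumed record delimiter
            yield ET.fromstring("".join(buf))
            buf = []

# Hypothetical log content; real WebSEAL element names may differ.
log_lines = [
    '<event rev="1.2">\n',
    '  <outcome status="0">Successful</outcome>\n',
    '</event>\n',
    '<event rev="1.2">\n',
    '  <outcome status="1">Unsuccessful</outcome>\n',
    '</event>\n',
]

results = [e.find("outcome").text for e in parse_events(log_lines)]
```

The same principle applies in a job: accumulate text up to the record delimiter into one large column, then hand that column to the XML parsing stage one record at a time.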
While you are doing that, you can practice reading the "one" xml document that you pulled in order to get the metadata (using a Folder or External Source stage, and xmlInput).
When you have both of these techniques working in their own jobs...bring them together.
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
