Question

Database Schema to Custom Writer Format - .EV extension

  • 11 November 2015
  • 6 replies
  • 3 views

I have a custom format I am trying to write to. It's for consumption by Petrel (a program used for 3D modelling in the oil and gas industry) and uses the .ev extension. Here is a link to more info on the .ev format: http://petrofaq.org/wiki/How_to_import_completion_event_data_in_Petrel#Well_event_file_format

The challenge: going from a structured database format to the following schema.

FROM

WELL-1 01/01/2004 perforation 4000 4020 .5 0
WELL-1 01/12/2002 barefoot 3500 .4 -2
WELL-1 01/01/2005 rework 3600 3900 .38 0.5
WELL-2 01/01/2004 perforation 4000 4020 .5 0
WELL-2 01/12/2002 barefoot 3500 .4 -2
WELL-2 01/01/2005 rework 3600 3900 .38 0.5

TO:

UNITS FIELD

WELLNAME WELL-1
--DATE EVENT MD1 MD2 Diameter Skin
01/01/2004 perforation 4000 4020 .5 0
01/12/2002 barefoot 3500 .4 -2
01/01/2005 rework 3600 3900 .38 0.5

WELLNAME WELL-2
--DATE EVENT MD1 MD2 Diameter Skin
01/01/2004 perforation 4000 4020 .5 0
01/12/2002 barefoot 3500 .4 -2
01/01/2005 rework 3600 3900 .38 0.5

The UNITS, WELLNAME, and --DATE lines are the static schema/headers; the dated rows are the dynamic data.

The problem is that the output file essentially strips the database 'style' and puts the well name in a row as a header, then groups that well's data below it. There is a break, then the next well and its associated data.

I thought about some type of fanout, but I need everything in the same text file. It's not quite XML/JSON and not quite CAT.

Wanted to put this out to the community in case someone has a creative approach. Otherwise I may have to tackle this programmatically.
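For the programmatic route, the grouping can be sketched outside FME in a few lines of plain Python. The rows and header text below are taken from the example above; this is only a sketch of the logic, not a Petrel-validated writer:

```python
from itertools import groupby

# Flat database-style rows from the example above: (well name, record text)
rows = [
    ("WELL-1", "01/01/2004 perforation 4000 4020 .5 0"),
    ("WELL-1", "01/12/2002 barefoot 3500 .4 -2"),
    ("WELL-2", "01/01/2004 perforation 4000 4020 .5 0"),
    ("WELL-2", "01/12/2002 barefoot 3500 .4 -2"),
]

lines = ["UNITS FIELD", ""]
# groupby assumes the rows are already sorted by well name
for well, group in groupby(rows, key=lambda r: r[0]):
    lines.append(f"WELLNAME {well}")
    lines.append("--DATE EVENT MD1 MD2 Diameter Skin")
    lines.extend(record for _, record in group)
    lines.append("")  # blank line between well blocks

ev_text = "\n".join(lines)
```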

 

 

Thanks!

Matt

6 replies

Hi Matt,

I did something similar a while back to convert GTFS to CIF, which is a similar text-based format. My process was as follows:

1. Use the TEXTLINE writer and build the space- (or is it tab-?) delimited lines using a StringConcatenator to populate the text_line_data attribute. Use a counter to add a sequential ID to each line, and also add another attribute called "_wellname" containing the well name.

2. For your "WELLNAME WELL-1" line, aggregate all wells by name and use a similar AttributeCreator, with a _count of "-2" so it comes before the header rows.

3. In parallel, inject the header row using another AttributeCreator - you'll want a text_line_data value of "--DATE EVENT MD1 MD2 Diameter Skin" and a _count value of "-1" so it comes just before the actual data.

4. Inject the blank rows and file header using a similar process.

5. Sort by _wellname and _count.

I'm not sure that explanation made a lot of sense, but hopefully the general process of generating headers and then sorting all the data does? Let me know if not!

Cheers,
Roland.

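Roland's sort trick can be sketched in plain Python (outside FME). The negative _count values are the ones from the steps above (-2 for the WELLNAME line, -1 for the column header), plus a -3 blank separator; the UNITS FIELD file header from step 4 is omitted here for brevity:

```python
# Build (wellname, _count, text_line_data) records, then a single sort
# by (wellname, _count) puts the injected headers before each well's data.
HEADER = "--DATE EVENT MD1 MD2 Diameter Skin"

data = {
    "WELL-1": ["01/01/2004 perforation 4000 4020 .5 0",
               "01/12/2002 barefoot 3500 .4 -2"],
    "WELL-2": ["01/01/2004 perforation 4000 4020 .5 0"],
}

records = []
for well, rows in data.items():
    records.append((well, -3, ""))                  # blank row between groups
    records.append((well, -2, f"WELLNAME {well}"))  # step 2
    records.append((well, -1, HEADER))              # step 3
    for i, row in enumerate(rows):                  # step 1: sequential ID
        records.append((well, i, row))

records.sort(key=lambda r: (r[0], r[1]))            # step 5
text_lines = [r[2] for r in records]
```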
Hi @rollo - Your approach was flawless. It took me a second to realize I had to create each header row in parallel, but once I got through that it works great!

I also added a third row (_count = -3) as a blank, which injected a blank row between groups.

Hi Matt,

Alternatively, you can concatenate the records (data rows) for each well using the Aggregator:

Group By: the attribute that stores the well name
Attributes to Concatenate: the attribute that stores a record
Separator Character: newline - special character LF (\n) or CR+LF (\r\n)

Then concatenate:

"UNITS FIELD[newline]" for the first well / [newline] for the other wells,
the common header (2 lines),
and the concatenated records.

Finally, write the resulting text strings into a text file with the Text File writer.

Takashi

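In plain Python, Takashi's Aggregator approach amounts to concatenating each well's records with a newline and prepending the per-well header; a minimal sketch, with the data taken from the example in the question:

```python
# Group records by well name (the "Aggregator" with Group By + concatenate),
# then prepend UNITS FIELD before the first well and a blank line before
# the others, followed by the two common header lines.
rows = [
    ("WELL-1", "01/01/2004 perforation 4000 4020 .5 0"),
    ("WELL-1", "01/12/2002 barefoot 3500 .4 -2"),
    ("WELL-2", "01/01/2004 perforation 4000 4020 .5 0"),
]

groups = {}
for well, record in rows:
    groups.setdefault(well, []).append(record)

blocks = []
for i, (well, records) in enumerate(groups.items()):
    prefix = "UNITS FIELD\n\n" if i == 0 else "\n"
    header = f"WELLNAME {well}\n--DATE EVENT MD1 MD2 Diameter Skin\n"
    blocks.append(prefix + header + "\n".join(records))  # LF (\n) separator

ev_text = "\n".join(blocks)
```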

Hey @takashi. I am trying your approach but I am getting a wonky (yes, that is a technical word) output.

I've attached the workbench (template + sample data) below and was hoping you could take a quick look to see where I'm going wrong. I've bookmarked in green the workflow that works and highlighted in red your workflow.

Workbench: db-to-petrel-ev-format.fmwt

I get pretty stubborn when I can't make things work, and even though @rollo's approach works perfectly, I'm interested in skinning this cat multiple ways in the interest of future projects.


Hi @matthewbrucker, you have to concatenate "text_line_data" with the Aggregator. Have a look at the attached workspace example: 469-db-to-petrel-ev-format-2.fmwt


OK wow, thanks @takashi. That makes perfect sense now. I was aggregating the wrong attribute(s) instead of text_line_data as you said. Works like a charm.
