Thursday, December 17, 2009

Six O'Clock News Followup

The KTRK ABC Channel 13 news CO2 story mentioned in an earlier blog entry was aired last night. My cameo role was integrated seamlessly with the larger story line. An excellent background piece on local aspects of the CO2 sequestration issue. Thank you Ted Oberg for an enlightened and well-constructed piece.

Tuesday, December 15, 2009

iPhone update from AGU

My first AGU meeting. Lots of CO2 sequestration talks yesterday: fault permeability, basin-scale storage capacity, steel tubular corrosion rates, CO2 accounting strategies.

AGU is quite different from SEG. Abstracts are only a paragraph or two, rather than the 4-page expanded SEG format. Talks are similar in length and style, but session chairmen give a nice intro that SEG could use. Another good idea at AGU: the open areas near the meeting rooms are populated with large round tables seating 8-10 each. On each floor of Moscone West there were maybe 50-60 such tables. Great discussion pods. There is also a series of long tables with power strips for laptop hookup.

The exhibition floor is smaller than SEG's and only opens on Tuesday (curious). But the shocker is the poster area: easily larger than the SEG exhibit floor, a vast warehouse of poster stands, including large theme signs: seismology, tectonophysics, deep earth physics, and so on. Clearly the real scientific exchange here happens at the posters. Compare this to SEG, which puts the poster area in strange places that seem designed to deter all but the hardiest. At SEG 2009, the posters sat in a cavernous concrete space away from the exhibit hall and presentation rooms. Downright gloomy, if not actually depressing.

Anyway, AGU got the poster thing right and SEG could learn a lesson or two from them.

Thursday, December 10, 2009

Six O'Clock News

An interesting phone call came in today. It was on my cell phone, and caller ID showed an unfamiliar number. Below is my recollection of the conversation.

"Good morning, is this Professor Liner? I'm Ted Oberg with KTRK Channel 13 News."
"Ah, Good morning."
"Do you have a minute to talk?"
"Sure, what's up?"
"Well I am reporting a story about CO2 sequestration and I saw you in the video at the University of Houston."    [Note: I am about half way in]
"You seem to make the concepts very simple, simple enough to explain on the air."
"Interesting, and thank you"
"The story is centered on an energy company in Dallas who is planning a CO2 pipeline from a CO2 field to an oil field for enhanced oil recovery. But they are also going to run it near potential CO2 capture industrial sites. The hope is that as CO2 capture and sequestration takes off, they will be able to have people tap into their new pipeline."
"I see."
"So do you think we could get together for a short interview?"
"When did you have in mind?"
"How about today? We could come by your office, say 1:30 or 2:00."
"Actually, I'm off campus today. How about if I come to the studio at 1:30?"
"That would be great."
"OK, see you then."

So the meeting came off as planned and I would like to thank Ted in this semi-public medium for the chance to bring some of these issues to a wider audience. I am honored.

In the interview we talked on camera for about 20 minutes and did a 'walking shot' down the hallway. He wanted to know about the big picture, how the carbon capture and sequestration activity might affect business and individuals. All this will be boiled down to a few comments embedded in the bigger story of the proposed pipeline, CO2 sequestration in Texas (there is very little so far), and the Copenhagen climate meeting.

The piece involving my interview will run on the six o'clock news on Dec 16, toward the end of the Copenhagen meeting when CO2 will be very much in the news. Unless you are a pro like Ted, you never know how you will appear in front of the camera. I will be a nervous viewer.

For an overview of my take on CO2 capture and sequestration, a good source is the first half of a recent seminar I gave at The University of Texas.

For the more industrious readers, I maintain a wiki of CO2-related links. It gives some small indication of the scope of what is going on in these very early days of carbon capture and sequestration.

Saturday, December 5, 2009

Seismic goes multisource

[Note: A version of this blog entry will appear in World Oil (Jan, 2010)]

The role of seismic data in hydrocarbon exploration is risk reduction: avoiding dry holes, marginal producers, or getting reserves seriously wrong. The method had early and spectacular successes, beginning with the 1924 discovery of the Orchard Field in Ft. Bend County, Texas. Those early surveys relied exclusively on explosives as an energy source, but explosives have obvious issues with safety and environmental damage. Just as limiting is the fact that explosive sources give us very little control over the emitted waveform; basically our only knob to turn for higher or lower frequency is the charge size. It is analogous to modern marine airguns. An individual airgun emits a characteristic spectrum that varies with gun size. Like a bell choir, small guns emit higher frequencies and big guns emit lower frequencies. In marine seismic acquisition we can compose any spectrum we want by designing an appropriate airgun array, but to do this onshore with drilled shot holes is a very costly enterprise. To be sure, it was tested in the old days of land shooting, when there were no other viable source options. There are some very interesting 1950s papers in GEOPHYSICS in which empirical equations were found relating charge size and depth to the dimensions of the resulting blast hole. It must have been a wild time for the researchers, since charges up to 1 million pounds were tested.
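
As a toy illustration of that bell-choir idea, here is a sketch of my own showing how individual gun spectra stack into an array spectrum; the gun peak frequencies and the Gaussian spectral shape are assumptions made for illustration, not measured airgun signatures.

```python
# Toy illustration only: gun peak frequencies and the Gaussian spectral
# shape below are assumptions, not measured airgun signatures.
import numpy as np

freqs = np.linspace(1.0, 120.0, 500)                 # frequency axis, Hz

def gun_spectrum(peak_hz, width_hz=20.0):
    """Crude bell-shaped amplitude spectrum for a single airgun."""
    return np.exp(-0.5 * ((freqs - peak_hz) / width_hz) ** 2)

# "Bell choir": big guns peak low, small guns peak high (illustrative values)
gun_peaks_hz = [8.0, 15.0, 30.0, 60.0]               # large guns to small guns
array_spectrum = sum(gun_spectrum(p) for p in gun_peaks_hz)

print(f"Toy array spectrum peaks near {freqs[np.argmax(array_spectrum)]:.0f} Hz")
```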

But even as those experiments were underway, the seismic world was changing. An explosive charge contains a broad band of frequencies that form a pulse of energy injected into the earth over a brief time span, perhaps 1/20th of a second (50 milliseconds). In fact, Fourier theory tells us that the only way to build such a short-duration pulse is by adding up a wide array of frequencies. William Doty and John Crawford of Conoco were issued a patent in 1954 describing a new kind of land seismic source that did not involve explosives. The new technology, called vibroseis, involved a truck-mounted vibrating baseplate. Vibroseis applies the Fourier concept literally, operating one frequency at a time and stepping through the desired frequency range in a matter of 10-14 s. Think for a moment about just one aspect of this advance, namely seismic power. With an explosive source the only way to put more power in the ground is to use a bigger charge. If one big charge is used, we have to live with lower frequency; if an array of charges is used, the cost skyrockets. With vibroseis there are many options for getting more power into the earth: a bigger vibe truck, multiple vibes, longer sweeps, or some combination of all three. In addition to customizing power, vibroseis also allows complete control over the source spectrum. Little wonder that vibroseis soon became the source of choice for land applications worldwide, particularly after the original patents expired in the early 1970s. Today, explosives are used only in places where a truck cannot go, or because of some business rationale like crew availability or personal preference. From a science point of view vibroseis is a clear winner.
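
To make the sweep-and-correlate idea concrete, here is a small numerical sketch of my own. The 12 s linear sweep from 8 to 80 Hz is a typical value I assumed, not a parameter from any particular survey; the point is simply that correlation compresses a long sweep into a short wavelet.

```python
# A minimal sketch of the vibroseis principle: a long sweep stepping through
# frequencies is compressed by correlation into a short wavelet. The 12 s,
# 8-80 Hz sweep below is an assumed, typical value.
import numpy as np

dt, T = 0.002, 12.0                                  # sample interval and sweep length, s
f0, f1 = 8.0, 80.0                                   # sweep band, Hz
t = np.arange(0.0, T, dt)

# Linear upsweep: instantaneous frequency ramps from f0 to f1 over T seconds
sweep = np.sin(2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

# Correlating recorded data with the pilot sweep collapses each 12 s sweep
# into a short (Klauder) wavelet; the autocorrelation shows the effect.
klauder = np.correlate(sweep, sweep, mode="full") / len(sweep)
lag = (np.arange(len(klauder)) - (len(sweep) - 1)) * dt

above_half = lag[np.abs(klauder) > 0.5 * np.abs(klauder).max()]
print(f"Correlated wavelet is ~{above_half.max() - above_half.min():.3f} s wide "
      f"at half amplitude, versus the {T:.0f} s raw sweep")
```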

Over the last four decades, the land seismic industry has transitioned from 2D shooting with a few dozen channels to current 3D practice involving tens of thousands of channels. The higher channel count allows shooting with tighter trace spacing (bin size), better azimuth coverage, and higher fold. These advances have led to improved seismic data through better signal extraction, noise suppression, and improved suitability for processing algorithms. They have also led to astronomical growth in survey size, data volume, and acquisition time. A big shoot in 1970 took a few weeks; today it can take a year or more. This is not just a matter of money, although this new kind of data is very expensive; it is time that matters most. Large seismic surveys are now acquired on time scales equal to drilling several wells, and are even approaching lease terms. If it takes two years to shoot and one year to process a big 3D, then a three-year lease starts to look pretty short.
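
For readers who like numbers, here is a back-of-the-envelope sketch of how channel count translates into fold. It uses the standard 2D relations (CMP bin equal to half the group interval, nominal fold equal to channels times group interval over twice the shot interval) with survey parameters I invented for illustration; real 3D geometries are more involved.

```python
# Back-of-the-envelope only: standard 2D relations with assumed survey
# parameters, to show how channel count drives nominal fold.
def nominal_fold_2d(n_channels, group_interval_m, shot_interval_m):
    """Nominal CMP fold for a simple 2D line."""
    return n_channels * group_interval_m / (2.0 * shot_interval_m)

group_m, shot_m = 25.0, 50.0                         # assumed intervals
for n_channels in (48, 480, 4800):                   # 1970s-era to modern counts
    fold = nominal_fold_2d(n_channels, group_m, shot_m)
    print(f"{n_channels:5d} channels -> bin {group_m / 2:.1f} m, nominal fold {fold:.0f}")
```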

So what is the bottleneck? Why is it taking so long to shoot these surveys? The main answer goes back to a practice that was born with the earliest seismic experiments. The idea is to lay out the receiver spread, hook everything up, then head for the first shot point. With the source at this location, a subset of the receivers on the ground is lit up, waiting for a trigger signal to begin recording ground motion. The trigger comes, the source simultaneously acts, the receivers listen for a while, and data flows back to the recording system along all those channels. The first shot is done. Now the source moves to shot location 2, the appropriate receiver subset is lit up, the source acts, and so on. So it was when Karcher shot the first seismic lines near the Belle Isle library in Oklahoma City back in the 1920s, and so it is with most land crews today.

The key feature of this procedure is that only one source acts at a time. Over the years, there has been great progress in the efficiency of this basic model. One popular version (ping-pong shooting) has two or more sources ready to go; they trigger sequentially at the earliest possible moment, just barely avoiding overlap of the earth response from one source to the next. There are many other clever methods, but in any form this is single-source technology. It carries a fundamental time cost because no two sources are ever active at the same time, for good reason.
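
As a crude illustration of why even the most efficient single-source scheme hits a floor, here is a toy time-budget model; every number in it is an assumption made up for illustration, not a field statistic.

```python
# Crude time-budget model with assumed numbers. The point: ping-pong hides
# source move-up time behind the other source's sweep, but the earth
# responses still never overlap, so a hard floor remains.
sweep_s, listen_s, move_up_s = 12.0, 5.0, 30.0       # assumed per-shot times
n_shots = 10_000                                     # assumed survey size

single_source_h = n_shots * (sweep_s + listen_s + move_up_s) / 3600.0

# With two alternating sources, one moves up while the other sweeps and the
# spread listens, so each shot costs roughly the larger of the two intervals.
ping_pong_h = n_shots * max(sweep_s + listen_s, move_up_s) / 3600.0

print(f"single source: ~{single_source_h:.0f} h of source time")
print(f"ping-pong    : ~{ping_pong_h:.0f} h of source time")
```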

If two sources are active at once we will see overlapping data, similar to the confusion you would experience with a different conversation coming in each ear. Interference from nearby seismic crews was observed and analyzed in the Gulf of Mexico in the 1980s, leading to rules of cooperation among seismic contractors to minimize the interference effect. Things stood pretty much right there until a recent resurgence of interest in overlapping sources, now termed simultaneous source technology (SST). Both land and marine shooting are amenable to SST, but we will explain the concept in relation to land data.

The promise of simultaneous source technology is clear and compelling. If we can somehow use two sources simultaneously, then the acquisition time for a big survey is effectively cut in half. Of course it is not quite that simple; some aspects of the survey time budget, such as mobilization and demobilization, laydown, and pickup times, are unchanged. But source time is a big part of the time budget, and SST directly reduces it. Field tests using four or more simultaneous sources have been successfully carried out in international operations, and the time is ripe for SST to come onshore in the US.
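
Here is the arithmetic behind that claim, as a hedged sketch: only the source-related part of the budget scales with the number of simultaneous sources, and the split between fixed days and source days below is assumed for illustration, not measured.

```python
# Hedged sketch of the survey-time argument: only the source-related part of
# the budget scales with the number of simultaneous sources. The split
# between fixed days and source days is an assumption.
def survey_days(fixed_days, source_days, n_simultaneous):
    return fixed_days + source_days / n_simultaneous

fixed, source = 60.0, 540.0                          # assumed: ~20-month single-source survey
for n in (1, 2, 4, 12):
    print(f"{n:2d} simultaneous source(s): ~{survey_days(fixed, source, n):4.0f} days")
```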

As if we required another reason to prefer it over explosives, the high-tech aspects of vibroseis lead to elegant and accurate methods of simultaneous shooting. The details need not concern us, but theory has been developed and field tests done that show various ways of shooting vibroseis SST data. The nature of the universe has not changed: When multiple sources overlap in time, the wave field we measure is a combination of the earth response to each source. But with some high-powered science, one response can be separated almost entirely, just as we manage to isolate one conversation in a loud room from all the others. The data corresponding to each simultaneous source is pulled out in turn to make a shot record, as if that source had acted alone. In about the time it took to make one shot record the old way, we have many shot records. A survey that would take a couple of years with a single source could be done in a couple of months by using, say, 12 simultaneous sources. An amazing case of "If you can't fix it, feature it".
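
To show the flavor of the separation, here is a toy experiment I put together; it is my own sketch, not the proprietary method any contractor uses. Two vibes sweep at the same time, one up and one down, and correlating the blended trace with each pilot sweep recovers that source's reflections, with the other source collapsing to weak crosstalk. All sweep parameters and reflection times are assumptions.

```python
# Toy deblending-by-correlation sketch with assumed sweeps and reflections.
import numpy as np
from scipy.signal import find_peaks

dt, T = 0.002, 12.0
t = np.arange(0.0, T, dt)

def linear_sweep(f_start, f_end):
    """Linear vibroseis sweep from f_start to f_end Hz over T seconds."""
    return np.sin(2.0 * np.pi * (f_start * t + 0.5 * (f_end - f_start) / T * t ** 2))

sweep_a = linear_sweep(8.0, 80.0)                    # source A: upsweep
sweep_b = linear_sweep(80.0, 8.0)                    # source B: downsweep

# Toy earth: a couple of spike reflections for each source (samples, amplitudes)
n_rec = 2000                                         # 4 s of record at 2 ms
refl_a = np.zeros(n_rec); refl_a[[400, 900]] = [1.0, -0.9]
refl_b = np.zeros(n_rec); refl_b[[300, 1200]] = [0.8, 0.6]

# Blended record: both sources act at once, so their responses overlap in time
record = np.convolve(refl_a, sweep_a) + np.convolve(refl_b, sweep_b)

def deblend_by_correlation(data, pilot):
    """Correlate the blended data with one pilot sweep and keep positive lags."""
    full = np.correlate(data, pilot, mode="full")
    return full[len(pilot) - 1:len(pilot) - 1 + n_rec] / np.dot(pilot, pilot)

for name, pilot, truth in (("A", sweep_a, [400, 900]), ("B", sweep_b, [300, 1200])):
    est = deblend_by_correlation(record, pilot)
    peaks, _ = find_peaks(np.abs(est), height=0.4 * np.max(np.abs(est)), distance=50)
    print(f"source {name}: true reflection samples {truth}, recovered {peaks.tolist()}")
```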

For many years we have been shooting 3D land seismic data to fit the pocketbook, making compromises at every turn. Physics tells us what should be done, but we do what we can afford. Bin sizes are too large, fold is too low, only vertical component sensors are used, azimuth and offset distributions are far from ideal. When physics and finance collide, finance wins.

But now the game is changing. With simultaneous sources, the potential is there to make a quantum leap in seismic acquisition efficiency thereby driving time- and dollar-cost down. The seismic data we really need for better imaging and characterization of tough onshore problems may actually become affordable.