Halfway There!

Photo by author

The crocuses have bloomed in our lawn and on our garden slopes. The first signs of spring, these most hopeful of flowers will often bloom through the snow, letting us know that warmer temperatures are coming. We look for the first signs of spring and cherish them.

I just received my first dose of the coronavirus vaccine today. Like the crocus, the increasing availability of the vaccines offers hope that the world may emerge from the doldrums of winter into the bright sunlight of summer. Perhaps we can look forward to those things we once took for granted, like enjoying a meal at a restaurant, or singing in our various choirs without fear of contagion.

Being the science nerd that I am, I am glad that I received a dose of the messenger RNA (mRNA) vaccine. There is no better example of technology to demonstrate how far we have come in understanding the genome and how to influence the biology of life. To think that we can now introduce lipid particles in a vaccine that are designed to penetrate cell membranes. There they deliver their dose of mRNA, which causes the immune system to recognize, and attack, invaders bearing the distinctive spike protein that COVID uses to latch onto cells. No longer do we need to rely upon killed versions of the whole virus (although the Johnson and Johnson vaccine does rely upon a different, modified virus, an adenovirus vector, rather than mRNA to produce immunity). Most of the arguments against vaccines, like that they rely upon cells from aborted fetuses, or contain aluminum or mercury, are rendered moot by the nature of the mRNA vaccines. They contain no ingredients that were ever alive, and thus avoid most of the qualms that the anti-vaccination crowd raises.

But never doubt the ability of those who do not understand the underlying science to sow doubt even with this brand-new mode of vaccine delivery. No, since the scary boogeyman of DNA is invoked, those who must protest medical advances to justify their own superiority have declared that these new vaccines will hijack your DNA, and cause unspeakable mutations that will show up in later years to enable the goal of population reduction to be achieved. Whose goal? Why, Bill Gates of course. It is he who has taken the place of George Soros in many regards as the face of evil for those who refuse any scientific advance. Bill has mandated the insertion of microchips into the vaccine, so that the vaccination status of all may be determined by a simple scan, and your access to travel, recreation, and money can be held hostage to your vaccination status. I’ve even seen discussion about ingesting a horse de-wormer (ivermectin) as a treatment regimen rather than subjecting people to the vaccine.

Since I was trained as a chemical engineer, and have studied biochemistry and other related fields, I have much less fear of products developed through adherence to the scientific method. I wish there were a way I could convey the knowledge I have gained to those who are insistent upon believing the pure BS that is spread through on-line media. But I should not be surprised. Those who believed the past president to be a great businessman have proven remarkably recalcitrant in abandoning their adherence to worshiping the great one. You need look no further than the adulation given to the golden statue of their chosen leader this past weekend at CPAC. Those who have been taken in by a scam artist are loath to admit their own folly. It has been said that mankind goes mad in herds, but comes to its senses one by one. We who believe we are the rational ones will never win an argument with those who are still in the throes of their delusions.

Anyway, I’ve now had the first dose of the vaccine. I anticipate that when I receive the second dose in about four weeks, my immune system will already recognize the new dose as an interloper that must be attacked, and I expect to feel like crap for a day or so. That is a small price to pay compared to the price that can be exacted by a full-fledged infection with the virus. I’ve been following the progress of a friend who was severely infected, with pneumonia from the virus. Just the stories of his near brush with death, and the tenacity he’s had to use to battle back, let me know that I don’t wish to share his experience.

In some ways this pandemic has served as a wake-up call for the world. Imagine if this airborne virus had the lethality of Ebola. I think that’s what some deniers are maintaining: that if people aren’t dropping dead in the streets, then this is not a disease worth fearing. I really am hoping that those who hold political power decide it is worthwhile to pay for insurance policies for the future. What does that look like? It looks like scientists from multiple nations working at facilities across the globe, ready to sound the warning when they detect a disease of concern. It looks like adequate stocks of protective equipment kept on hand. It looks like an analysis of supply chains, and investment to harden them, so that we are not cut off from vital supplies when the world shuts down for the next pandemic. Because one thing we do know: as long as mankind keeps impinging upon virgin territory, we will come into contact with new and more deadly diseases in the future. May we have the wisdom to actually learn from this episode, instead of adopting amnesia as a coping mechanism.

Off We Go, Into the Wild Blue Yonder

Photo by SpaceX on Pexels.com

We are living in a golden age of exploration. Part of human nature has always been to push the boundaries, whenever and wherever there was incentive. The spices of the Orient, along with unknown riches, tempted the explorers of Europe. Now, we are in a race for space. With the recent launch of the Perseverance mission to Mars, that planet is now infested with both unmanned rovers and orbiting observers. Participants in this infestation include India, China, the joint venture between the Russians and Europeans, multiple missions from the US, and the mission from the United Arab Emirates that recently launched. For millennia humanity watched the planets, convinced that they held great influence over our existence on Earth. Though we may laugh at astrology today, it is undoubtedly responsible for much of the growth of knowledge about patterns in the cosmos, due to the need to know the positions of the planets at the time of an individual’s birth.

Indeed, astrology still has millions of adherents, convinced that the orientation of the planets holds the means to provide order to a seemingly chaotic life. But once our understanding of the cosmos went beyond mere observation to a systematic search for knowledge, we became relentless in trying to uncover the mysteries of our solar system neighbors. We have seen evidence of great floods on Mars, and the search continues to see if we can find direct evidence of life elsewhere, either from the past or, tantalizingly, still alive somewhere under the Martian surface.

Now there are private businesses aimed at the conquest of space. These are not just the vanity projects of the new tech aristocracy, but serious attempts at commercializing both near-Earth exploration and, eventually, solar system exploration. It will be difficult to generate a positive cash flow from these activities, but what we’ve seen is that companies are willing to fund the immense investment in space vehicles. We’ve weathered the gap between the retirement of NASA’s shuttle (2 catastrophic failures out of 135 missions) and the resumption of launches to the space station aboard US vehicles. What is different now is that it is a private corporation, SpaceX, that has contracted with NASA for a series of launches. The first of these launches just splashed down in the Gulf of Mexico, completing a seemingly flawless flight. SpaceX is in competition with Boeing, which is concentrating mainly on heavier launch capabilities. Since private enterprise is funding the research, these companies will be looking for payoffs well beyond what government contracts provide. And that is why there truly is a new age of exploration, one that will result in humans setting foot on Mars sooner rather than later. The moon will also be revisited, and with current missions aiming at prospecting for ice on its surface, it may actually be possible to build a base on the moon itself.

Why do this? Well, I for one think it much better to use humanity’s creativity in exploration, rather than in building munitions and munition delivery systems. Although there have been significant advances in our understanding of physics, metallurgy, and chemistry through the development of better means of destruction, the use of these tools comes with immense human suffering. And as we’ve seen in the recent explosion involving ammonium nitrate in Lebanon, it does not take a lot of technology to spread a lot of death and destruction. Spending the money on science to increase our range and knowledge instead of on destruction seems a much more humane way to proceed.

Besides, there is much to learn. Even beyond the possibility of life on Mars, there is the tantalizing thought that life may exist underneath the ice caps covering the oceans of the satellites of the larger planets. The question we have is whether life is ubiquitous in the universe, spreading wherever chemical conditions couple with a source of energy to power life. If we do find life outside of our planet, it will have immense repercussions among the world’s religions. They must see if their theology can adapt to life existing in multiple locations, and allow for a creator that likes to experiment, rather than one totally invested in earth. I have always thought that limiting a creator to a single site in this immense universe did a disservice to the creator, since it imposed such tight restraints on its capabilities.

The age of exploration we live in goes well beyond the physical limitations of earth. We have been exploring the intricacies of the genome, learning the secrets to manipulate the formulas of life for our own benefit. Tools such as CRISPR, and improvements in DNA sequencing, are giving us the ability to work in the microscopic universe. Those abilities are coming into play now with the unprecedented speed at which vaccines against COVID-19 are being developed. A generation ago, we would not have had the knowledge to sequence the genome, learn the virus’s tricks for attaching to cells, and develop multiple ways to fight against this novel virus. It would have taken years of painful trial-and-error work to possibly come up with a vaccine. Today? We may have three modes of action incorporated into vaccines, and testing could be complete within a year of the initial confirmation of the virus’s structure.

Many ask why we spend money on exploration when we cannot meet our needs on earth. My answer is that it is through exploration and research that we discover ways to increase the economic pie, allowing a greater share for each individual. It is only through the growth of economic activity engendered by the discoveries from research and exploration that we can avoid the Malthusian fate that would otherwise engulf us.

Technological Change Over A Career

Control room

I went from college into a career with DuPont starting in 1976. After an initial assignment, I worked in a process that used a DEC PDP-8 computer for data monitoring and for control of certain critical process parameters. This was in a process that produced hydrogen cyanide, so reliability of the computer system was critical. This computer, when it was booted up, required setting toggle switches in order to start the sequence. Then a paper tape was run through a reader, and the system would lurch into operation. This modern machine also used punch cards for program input.

By 1984, our company began to use IBM PCs. To have this type of power on your desk was amazing. These were not used for process control, but they enabled us to write and distribute documents through an e-mail system, bypassing the old strictures of communication. If you can imagine living within a hierarchical system that required all communications to be written by hand, approved by supervisors, typed by a secretary, then copied and sent through corporate mail systems, that was the world as it existed in my company. It was the same as existed in most other companies around the world. It was also a first introduction to the ability of technology to replace jobs. Secretarial positions shrank in number once they were no longer needed as a key link in the communication process. Those who remained either had to be flexible enough to pick up other skills, or became administrative assistants to those high-ranking administrators who still valued having someone serve as an intermediary.

In the late-1980’s, the chemical process I worked with had a computer used exclusively for process monitoring. We had gotten past the toggle-switch and paper tape process, but I learned techniques for data compression. For each variable that was monitored, you got to choose how much change you would allow in the value before another data point was recorded. Computer memory was still limited, so it was necessary to use a bit of judgment to tweak each setting so that any signal noise was eliminated, but significant changes in variables were recorded and could be graphed. This computer also held the statistical program Minitab, which helped in determining correlations and other relationships between variables. I began using that program in 1991 to start tracking the performance of my 401K investments, a spreadsheet I maintained until my retirement in 2014.
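The deadband scheme described above is simple enough to sketch in a few lines. The Python below is purely illustrative; the actual monitoring software was proprietary, but the idea of recording a point only when a variable moves more than an allowed tolerance is the same:

```python
def compress(samples, deadband):
    """Deadband compression: keep a sample only when it differs
    from the last recorded value by more than `deadband`, so that
    signal noise is dropped but significant changes survive."""
    if not samples:
        return []
    recorded = [samples[0]]          # always keep the first point
    for value in samples[1:]:
        if abs(value - recorded[-1]) > deadband:
            recorded.append(value)   # significant change: record it
    return recorded

# A noisy but flat signal: only the real step change is stored.
signal = [100.0, 100.2, 99.9, 100.1, 105.0, 105.1, 104.9]
print(compress(signal, deadband=0.5))   # [100.0, 105.0]
```

Tuning the deadband per variable was exactly the "bit of judgment" mentioned above: too tight and you stored noise, too loose and you lost the changes worth graphing.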

This process had computers to monitor variables, but it still had individual control loops. Each variable that needed to be controlled had a piece of equipment on a panel board. Out in the field, a sensor would provide a reading that would be transduced into a 3-15 PSIG (pounds per square inch gauge pressure) signal. That signal would be fed into a small metal tube and routed back to the control room, where it got transduced back into an electrical signal and fed into the controller. We would use controller logic to provide the optimum settings for each particular loop to keep it stable. There was also a signal splitter that sent the signal to a chart recorder, where a paper chart was fed through and multi-colored inks were used to display multiple variables on a single recorder. Normally there was a maximum of three variables on a single chart recorder. The electronic signal was also sent to the monitoring computer. Now, if modifications were made in the field, say for a new piece of equipment, it would require running a new piece of tubing from the field back to the control room, placing a new controller into the metal board, and installing all of the sensors and transducers to enable the system to work. The entire process was labor and capital intensive, and required a significant number of operators, electrical and instrument (E&I) mechanics, and engineers to maintain a plant and keep it operating safely.
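Each of those transductions is just a linear span conversion. Here is a hedged sketch: the 3-15 psig span was the standard pneumatic signal range, but the temperature range below is invented purely for illustration:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a value from one instrument span to another."""
    fraction = (value - in_lo) / (in_hi - in_lo)
    return out_lo + fraction * (out_hi - out_lo)

# A sensor reading 50% of a hypothetical 0-200 degC span becomes
# 9 psig, the midpoint of the 3-15 psig pneumatic signal range.
pneumatic = scale(100.0, 0.0, 200.0, 3.0, 15.0)
print(pneumatic)   # 9.0

# Back in the control room, the pneumatic signal is transduced
# again, here into a percent-of-span value for the controller.
print(scale(pneumatic, 3.0, 15.0, 0.0, 100.0))   # 50.0
```

The "live zero" at 3 psig (rather than 0) let operators tell a true bottom-of-range reading apart from a dead signal line, which is part of why the span starts at 3 and not 0.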

During the late 1980’s and early 1990’s, the next change in process control occurred. Distributed Control Systems were commercialized. These computer systems replaced all of the controllers on a panel board with two computer consoles and a pair of keyboards for entry of commands. These computers used wire pairs directly from the field to provide their input, so no longer were 3-15 PSIG transducers or metal signal tubes needed. You always installed extra wire pairs in a wire bundle from a field signal box back to the DCS so if you expanded the number of controls or signals in the field, it only required installing the last leg of the wiring in the field.

The displacement of workers by computerization was huge with this step. The number of E&I mechanics needed to keep these systems up was much smaller than before, with all of the signal transducers, individual controllers, and chart recorders gone. Chart recorders were dispensed with entirely, as all records were retrievable via computer. And fewer control room operators were needed, since it was no longer necessary to go up and down the control panel and record readings every few hours. A single operator could maintain the entire process by himself, and the back-up operator could be assigned to the field for a portion of his shift (most but not all operators were male). Even with engineers, fewer were needed, since control loop tuning was all but eliminated by the new algorithms available in the computerized systems. It is no wonder the population of workers in my plant kept going down, year over year. It became a ritual that every 2-3 years, we would undergo a purge of excess people. Not all of it was due to automation, since world economic conditions rendered multiple chemical processes uneconomical, but at least half of the reductions in force were due to automation.

So during the roughly 15 years while I was directly supporting chemical manufacturing, constant changes in technology kept paring the need for employees of the company. At the same time, the support staff kept shrinking as well. Whereas we once had an entire group of workers tasked with maintaining and updating blueprints (I can still remember the ammonia aroma of a freshly printed blue-print), they left once all print updates were done on the computer. Since most documents traveled by e-mail, the need for physical mail distributors went way down as well. Combine that with the growing international competition in the chemical industry, and you will understand why the plant I worked at in West Virginia did not hire any hourly employees for a period of 20 years. If you really want to know why the middle class has atrophied in the US, just look at the jobs that were displaced due to technology improvements during the time from 1975-1995. And the technological changes have only increased since then. That is why the talk about Making America Great Again by revitalizing manufacturing rings hollow. The direction manufacturing has taken involves replacement of people by technology, allowing a smaller number of people to maintain a growing production output. We’d best be thinking about how to restructure the workforce to pay wages that reflect the value society places upon the work, rather than weigh everything on the scales of economic efficiency.

Celestial Billiards


Earlier this year I wrote about some of the risks facing humanity. I’ve begun to expound on those risks with additional information. Here is the first risk in the list, Celestial Billiards.

One risk we face that is certainly out of our control involves our environment. Not the environment on Earth, but the environment in the universe. There are many, many forces out there in the universe, and they care not in the least that they may affect life forms on our planet should they interact with it. There are many objects flying around in our solar system that can (and eventually will) intersect with our planet. If they are large enough, they can wreak havoc upon a city, or a nation, or upon the entire earth. Modeling of the impact of the Yucatan body that brought an end to the dinosaurs shows that the entire atmosphere of the earth was aflame from the impact and the subsequent reentry of material thrown out across the globe. Only creatures burrowed into the ground or shielded by water had much of a chance of surviving the immediate impact. Today, we use many telescopes to identify and track objects in our solar system. Still, it seems that every few months we learn of an object that could cause significant harm to the earth passing between us and the moon. One valuable use of a proposed Space Force would be to combine this detection team with a proactive defense capability, one able to divert an oncoming object away from impact with earth.

The odds of an ecosystem-destroying impact are very low. But our solar system has another kind of risk to throw at us, and this risk is probably orders of magnitude more likely than an asteroid’s impact. That is, we could have a solar flare that would wreak havoc upon our electric grid, causing large portions of the world to instantly regress to stone-age conditions. Our sun is huge, and we still don’t fully understand the physics of how large-scale eruptions throw off millions of tons of charged particles from the sun’s surface into space. If the eruption is large enough, and if it is aimed at Earth, it will hit us. We would have a mere two to three days’ warning. Would we be able to power down our electrical grid before it hit and caused catastrophic damage to our wiring and transformer base? Is there a way to shield the huge transformers so that they would survive? For it is known from physics that a changing magnetic field induces voltage in any conductor it sweeps across. And transformers are nothing but masses of wire windings, aimed at either stepping up or stepping down voltages. The last major solar storm to reach the Earth happened in 1859. At that time, only telegraph wires were strung across the countryside, but they give us an idea of what would happen in a much more wired world. In the 1859 flare, telegraph operators reported receiving electrical shocks from the induced voltages. Telegraph wires sparked and caused fires. And all of this happened with single wires carrying low-voltage electricity.
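The scale of the induction problem can be shown with a back-of-the-envelope calculation. During extreme geomagnetic storms, the geoelectric field at the ground has been estimated at a few volts per kilometer; the specific numbers below are round assumptions chosen for illustration, not measurements:

```python
# Illustrative estimate of the quasi-DC voltage a severe
# geomagnetic storm could drive along a long transmission line.
geoelectric_field_v_per_km = 5.0   # assumed severe-storm field
line_length_km = 300.0             # a long high-voltage line

induced_voltage = geoelectric_field_v_per_km * line_length_km
print(f"{induced_voltage:.0f} V of quasi-DC bias")   # 1500 V
```

Even a bias of this size, driving current through windings designed for pure AC, can push transformer cores toward saturation, and it is that saturation heating, not lightning-style surges, that destroys the large transformers.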

Were we to have such an event today, the damage would be catastrophic. Overloaded wires would cause transformers to blow. Not just the local ones on the poles that step voltage down to household level, but the huge ones that handle the high voltages used to transfer electricity across the country. These transformers are massive, there are insufficient spares available to restore service across a large swath of any country, and the manpower available to fix the grid is lacking. Look how long it took to restore service to Puerto Rico after the massive failure of their grid. It would be much worse with a massive solar flare. Thus here is another area where we need to invest manpower in preventive activity, and much of that manpower must be well-versed in electrical engineering and physics. More than just manpower, though: we must also invest in spare parts, and stage these transformers in locations from which they can be moved to where they are needed. Given the economic model for utilities, where state regulators must approve any rate increases tied to a utility’s investments, it will take a real awakening of the world to this risk to convince those in power to grant rate increases for a danger that may come tomorrow, but may not show for 100 years. Those who pay electrical bills will not understand prudent risk avoidance when it raises their bills unless there is a huge effort made to teach the public about this risk.

Chemicals I Have Made – Hydrogen Peroxide

hydrogen peroxide

It’s such a cute, cuddly chemical. Found in its brown plastic container in medicine cabinets across the world, it is poured on cuts and scrapes, where it foams up in bubbles. Safe enough to be used as a mouth rinse. Good old 3% hydrogen peroxide! But let me assure you, what is safe at 3% strength is not safe at 35% concentration. Or at 70%. Hydrogen peroxide, or H2O2, is a chemical that must be given a great deal of respect. In my career, I worked for several years in a process that made H2O2, and I’ve seen examples of its power.

When tank cars were loaded with H2O2, the hoses would still contain some of the liquid in the lines. There was an attitude that since this was not an organic material, and since the decomposition products were water and oxygen, it was not worthwhile to ensure that the last drops were purged out of the line. So a metal box was filled with steel scraps, metal shavings, and other pieces of metal with a high surface area. This box was used to decompose the peroxide before it ran into our cypress-lined trench system. On one occasion, significantly more peroxide ran down into the box than was intended, and not all of the peroxide decomposed before it entered the tar-covered cypress trench. Decomposition continued, and the heat released, along with the enriched oxygen environment inside the trench, actually caused the trench to begin smoldering. The fire alarm was sounded, and the investigation showed that the fire was essentially caused – by water. That is the power inherent in industrial-strength H2O2.
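The chemistry behind a fire "caused by water" is the catalyzed decomposition of the peroxide, which releases both heat and pure oxygen (the heat of reaction shown is an approximate literature value):

```latex
2\,\mathrm{H_2O_2(l)} \;\longrightarrow\; 2\,\mathrm{H_2O(l)} + \mathrm{O_2(g)},
\qquad \Delta H \approx -98\ \mathrm{kJ\ per\ mol\ of\ H_2O_2}
```

The metal surfaces in the box served as the catalyst; once the reaction outran the box, the same heat and oxygen enrichment went to work on the tar-covered trench.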

Before I worked at the plant, they had a specialized still that concentrated peroxide to 90% purity. That strength was used as a rocket fuel, and as a propellant for torpedoes. I never heard any stories about accidents with that grade, but it would take very little to release the energy found in a chemical that strong. After I left the Memphis Plant, I heard about something that happened to a tank car outside of the plant. Tank cars for peroxide were made of aluminum about 1/2″ thick. One night, a tank car essentially exploded, opening up the top like a pop can. The thought is that someone playing with a rifle shot the tank car. There is a little organic material that sits atop commercial-grade H2O2, which had reacted to form organic peroxides. The energy from the rifle shot caused the organic peroxide to detonate, which triggered the release of the oxygen from the decomposing peroxide. I saw the car on a trip back to the plant. It clearly showed that there is a lot of energy available in 70% H2O2. I have searched diligently on the internet, but I can find no on-line evidence of this incident. One can only imagine what would have happened if this incident had occurred after 9/11.

The process for making H2O2 is complex. An organic solution, called the working solution, is the key to creating the H2O2 molecule; the working solution recycles to begin the process again. It first enters the hydrogenators, where hydrogen gas contacts a catalyst of palladium chloride laid down as palladium metal on alumina particles. The palladium chloride comes as a solution in 5-gallon pails, costing multiple thousands of dollars per pail. After the catalyst is filtered out, the working solution goes into the oxidizers, where air is blown through the solution. The hydrogen grabs onto the oxygen and forms H2O2, which is then extracted with water and concentrated in distillation stills. The working solution then returns, ready to run through the loop once more.
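This description matches the industry-standard anthraquinone (AO) process. Written as reactions, with AQ standing for the alkyl-anthraquinone carrier in the working solution, the loop looks like this:

```latex
\text{Hydrogenation:}\quad \mathrm{AQ} + \mathrm{H_2} \;\xrightarrow{\ \text{Pd}\ }\; \mathrm{AQH_2}
\qquad
\text{Oxidation:}\quad \mathrm{AQH_2} + \mathrm{O_2} \;\longrightarrow\; \mathrm{AQ} + \mathrm{H_2O_2}
```

The net effect is hydrogen plus oxygen yielding hydrogen peroxide, with the quinone carrier recycled on every pass through the loop.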

That is a highly simplified version of the process. In practice, there is art involved. The active chemicals in the working solution can degrade over time. Therefore it is necessary to divert a side stream of working solution to flow through alumina, where the impurities that form in the hydrogenation step adsorb onto the alumina. The whole process with the catalyst and the hydrogenation step is labor intensive, and it is always necessary to withdraw a portion of the catalyst and replace it with fresh catalyst. To avoid that expense, and to achieve higher yield, the plant I worked at had invested in what is called a fixed-bed hydrogenation system. This had shown impressive results in lab-scale testing, and in pilot plant testing, where 5-gallon vessels were used to prove effectiveness before building a 1000-gallon facility for commercial production. The new commercial facility was commissioned and put in service.

But problems developed very rapidly. Even though the pilot plant testing did not show it, the commercial-scale facility developed hot spots inside the hydrogenator. This caused the active compound in the working solution to degrade much more rapidly than it did inside the fluid-bed hydrogenators. Since the investment in the working solution was several million dollars, it became imperative to find some way to reverse the damage. Lab work was expedited, and a solution was identified. They needed an engineer to manage the project and get the equipment ordered, installed, and functioning. I was plucked from the cyanide unit (see Chemicals I have made – Hydrogen Cyanide) and put in charge of the project.

It was a true baptism into project management. I got to travel to see the vessel we were buying in the fabrication shop, up in the extreme northwest corner of New Jersey. There you were more likely to see a black bear than a Joisey girl. But the best part of the project was that I got to install and program a Programmable Logic Controller (PLC). Now this was back in 1980, and these were brand-new tools that used all of the advances in semiconductors then available. You could replace a whole rack of single-function logic switches with a single unit that could perform nearly unlimited functions. I had a lot of fun learning the ladder logic that went with this, and getting the system to work as intended. We started up our treatment unit – and it didn’t solve the problem. The working solution was still getting degraded, even when the fixed-bed unit was operated at only a fraction of its intended production rate. The equipment I installed was abandoned, and the large fixed-bed unit was shut down and eventually dismantled. But I had learned valuable skills and had managed a significant project by myself.
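Ladder logic itself is graphical, drawn as rungs of contacts that energize coils, but its flavor can be conveyed in ordinary code. The Python below is a hypothetical illustration of a classic start/stop seal-in rung, not the actual program from that project:

```python
def rung(start_pb, stop_pb, motor_running):
    """Classic start/stop seal-in rung, as boolean logic:
    the coil energizes if (START is pressed OR the motor's own
    contact is already sealed in) AND the normally-closed STOP
    contact is not pressed."""
    return (start_pb or motor_running) and not stop_pb

motor = False
motor = rung(start_pb=True, stop_pb=False, motor_running=motor)
print(motor)   # True: START pressed, motor seals in
motor = rung(start_pb=False, stop_pb=False, motor_running=motor)
print(motor)   # True: seal-in holds after START is released
motor = rung(start_pb=False, stop_pb=True, motor_running=motor)
print(motor)   # False: STOP drops the rung out
```

A real PLC scans every rung in the program this way, over and over, many times per second; replacing a rack of hard-wired relays with editable rungs like this one was exactly the appeal.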

The manufacture of H2O2 does not differ much among chemical manufacturers. At the time I worked to make H2O2, all manufacturers used the process I described. Eventually, the unit I worked at was sold to another company in exchange for one of that company’s processes. I left H2O2 when I was promoted to process supervisor for the manufacture of acrylonitrile. But that’s another story for another time.

Why Bother With Those Pesky People?

Delivery drone

I have seen the future, and in it I have totally freed myself from the need of having to interact with anyone at more than a superficial level. I need not familiarize myself with the grocery clerks at the local supermarket, because I have a service that takes my on-line order, and delivers it into my house when I’m not around since I gave access to my house through Amazon.

No longer do I need to go outside and venture into a restaurant for a meal. Instead, I can scroll through menu listings from hundreds of restaurants and select a meal, then wait for the service to come deliver it directly to my door. Still have to interact with a delivery person though – can’t wait for drone delivery to come so I don’t have to interact with anyone.

I never see my friends because I am so busy keeping up with my facebook friends, my twitter following, and my instagram buds; who has the time to keep up face to face? Besides, if I went to see someone, I’d have to change out of my robe and slippers. No, online connections are so much better than having to put up with actually interacting with others.

Gas stations? Who needs gas stations? You can pay someone to deliver a set amount of gas to your car. Use Yoshi, and arrange for weekly fill-ups at work or home. It only costs a small delivery fee in addition to your gas purchase. Of course, I haven’t interacted with someone at a gas station for a long time, since they put credit card readers out at the pumps. And I can arrange for someone to come to my car and do my basic maintenance, like oil changes. Pretty soon I won’t have to use a car at all.

Unless I really need some money to pay for all of these delivery charges I’m racking up. Then I can use my car and drive for the ride services. Yes, I do have to deal with the people I carry, but I never have to interact with a supervisor or co-workers. I don’t even have a supervisor, and I don’t know who my co-workers are, since we are all contractors.

I can’t remember the last time I set foot in one of our local stores. In fact, I was surprised the last time I drove through one of the shopping areas in town. It looks like most of the stores are either closed or having going-out-of-business sales. Well, no wonder. They can’t compete on convenience with ordering things online and receiving them within one or two days. I do wonder what I’m going to do with the mountain of cardboard I’m accumulating.

My insurance company has this great new service. If I have a cold or some sort of minor issue, I don’t have to go to a doctor’s office. I can connect with the service, go through a series of questions, have a brief conversation with a doctor, and then they will arrange for an antibiotic for my sinus infection to be delivered to my door. How wonderful! You do know that 67% of all communicable diseases are transmitted through doctors’ offices. Not having to go out – that’s wonderful.

I can get my dog walked, even if I’m home. Just have to pay that service. One thing I haven’t gotten rid of though – still have to make it to a vet. No remote app for that – yet.

I’m living the good life.

 

Note: It seems like the purpose of most technology advances and technology business offerings is to eliminate the need to interact with other individuals. Soon we’ll flit through life like dragonflies, unaware of any other life form. Maybe we hook up and have a brief fling in the air, but then it’s over and we can fly off to our doom unbothered by any other human contact.

Scholarly articles are written pondering whether technology is fueling depression and loneliness. I don’t need a graduate degree in sociology to say: hell yes it is, and the race to the bottom is accelerating. Just look at how many folks check out of the moment they are in and look at their phones to catch up with the latest text or Facebook post. I’m sitting in a choir rehearsal, and if there’s a break of more than 30 seconds, my neighbor pulls out his phone and gets an update. I will admit, I have looked up a sports score sometimes, but I’m not guilty of seeking constant status updates.

With the social media movement, business has finally found something more addictive than slot machines. We the users gleefully allow ourselves to be parsed, analyzed, and monetized for commercial exploitation. We voluntarily expose our natures and our most personal thoughts and expressions, and release them willingly, just so we can see how many likes we got on our last post.

You know, I’m really amazed that Twitter expanded their character limit recently. With the ongoing shortening of the national attention span, I figured they’d cut it down to 100 characters (and you can have 20 additional emoticons in order to make up for the loss of bandwidth in cutting the character limit). How many folks have the time to read 280 characters! Sad!

 

Why so close? Chemical plants and oil refineries, and water.


Chemicals, oil, and water are linked eternally in a Faustian bargain. To produce most chemicals, and all petroleum products, it is necessary to have access to immense quantities of water. Thus, the infrastructure for these industries is found in the low-lying areas alongside rivers, and within the inlets and bays along the ocean coastline. When the inevitable floods happen, releases of chemicals and oil, and even explosions like the one seen in Crosby, Texas this week, can and will occur.

Why is there this dependence on huge quantities of water? Many chemical reactions require heat, and that heat normally comes in the form of steam. Steam is also used to separate chemicals through distillation. The tall columns seen in chemical plants and refineries are usually distillation towers, where products and wastes are drawn off at various levels. These products must then be condensed, which happens in heat exchangers, where water cools the vapors until they condense. The chemicals and the water don’t mix in these condensers, since they flow on opposite sides of the heat exchanger surfaces. But immense quantities of water pass through, and the water is warmed in the process, reducing its effectiveness for further condensing and cooling.
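For the curious, a back-of-the-envelope energy balance shows why the water quantities are so immense. Every number below is a hypothetical round figure chosen only to illustrate the scale; no real plant data is implied:

```python
# Rough energy balance for a condenser: the heat released by the condensing
# vapor must be absorbed by the cooling water (Q = m * cp * delta_T).
# All figures below are illustrative assumptions, not data from any real plant.

vapor_rate_kg_s = 10.0                 # condensing vapor flow (assumed)
heat_of_vaporization_j_kg = 400_000.0  # latent heat of the vapor (assumed)

water_cp_j_kg_k = 4184.0               # specific heat of liquid water
water_temp_rise_k = 10.0               # allowed cooling-water warm-up (assumed)

heat_duty_w = vapor_rate_kg_s * heat_of_vaporization_j_kg
water_rate_kg_s = heat_duty_w / (water_cp_j_kg_k * water_temp_rise_k)

print(f"Heat duty: {heat_duty_w / 1e6:.1f} MW")
print(f"Cooling water needed: {water_rate_kg_s:.0f} kg/s "
      f"(~{water_rate_kg_s * 3600 / 1000:.0f} tonnes per hour)")
```

Even this modest hypothetical condenser needs roughly a hundred kilograms of cooling water every second, which is why these plants sit right next to rivers and bays.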

The water used in heat exchangers and condensers may only be used once. This is single-use water, and it requires a large body of water nearby so that the warmed water can be released without adverse ecological impact. If the water is reused, it must be cooled back down before the next pass. This is done in cooling towers; the plumes you see rising from these large structures are water vapor, as water cools through evaporation while dripping down through the wooden framework of the tower. Cooling towers increase the concentration of salts in the water, since a portion of the water is lost to evaporation, and the remainder may cycle through the tower many times before being discharged to a body of water.
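That salt buildup follows the standard cycles-of-concentration bookkeeping used for cooling towers. Here is a minimal sketch; the flow rates and salt content are made-up round numbers, not measurements from any real tower:

```python
# Cycles of concentration in a cooling tower: evaporation removes pure water
# but leaves dissolved salts behind, so the circulating water ends up more
# concentrated than the makeup water. At steady state:
#   cycles = makeup / blowdown = circulating_conc / makeup_conc
# All figures are illustrative assumptions.

evaporation_rate = 90.0   # water lost to evaporation (m3/h, assumed)
blowdown_rate = 30.0      # water deliberately discarded (m3/h, assumed)
makeup_salt_ppm = 200.0   # dissolved solids in the makeup water (assumed)

makeup_rate = evaporation_rate + blowdown_rate  # replaces all water losses
cycles = makeup_rate / blowdown_rate
circulating_salt_ppm = makeup_salt_ppm * cycles

print(f"Cycles of concentration: {cycles:.1f}")        # prints 4.0
print(f"Salt in circulating water: {circulating_salt_ppm:.0f} ppm")  # 800 ppm
```

With these assumed flows, the circulating water carries four times the salt of the river water feeding it, which is why towers must continuously blow down a portion of the flow.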

Since it takes lots of energy to move large quantities of water, and lots of money to run long lengths of piping, most chemical plants are found immediately adjacent to the water. They are sited above normal flood levels, but when unprecedented flooding happens, as with Harvey, they are supremely vulnerable to damage from water. In my career in the chemical industry, I worked at two plants (in Tennessee and in West Virginia) that were situated along rivers. The plant in Tennessee did have problems long after I left, when backwater flooding from the Mississippi buried part of the plant, which was situated on a smaller feeder stream. Fortunately, it didn’t cause a release of chemicals and was not a large problem, but it highlights how close proximity to water carries its own set of risks.

I have been to plants in Texas that were totally inundated by the floods this week. One was at the end of the Houston Ship Channel, that immense concentration of oil and chemical plants along Texas 225. The other was in Beaumont, situated right next to the marshlands leading to the Gulf of Mexico. The facilities at these plants are designed to be safe and to shut down without causing chemical releases. But. There are limits to what you can do and still be safe. When feet of floodwater cover a site, the power of the water can do things that cannot be controlled. Water can erode pipe supports, and the dangling piping will bend and break, releasing the contents of the lines. Floodwaters can shove vehicles and boats into pumps and piping, causing them to break. Even in the normal process of shutting down facilities, excess venting and flaring of flammable and toxic compounds can happen, causing irritation and concern among the neighbors of these facilities.

Just as there is a Faustian bargain between these facilities and water, there is another relationship that comes into play: the relationship between the workers and their families and their proximity to the plant. Very often the workers for these facilities live in the neighborhoods surrounding the plants. Entire generations of workers have grown up nearly in the shadow of the towers of refineries and chemical plants. This is especially true in the region around the Houston Ship Channel. The towns of La Porte, Pasadena, Deer Park, and Baytown have a symbiotic relationship with their industrial behemoths. Only a single road separates the residential areas from the properties of the oil and chemical companies. Quite literally, the companies and the towns are all in the same boat at times like now.

The plant that had the explosions this week was a different type of chemical plant, one not adjacent to a large body of water. What it manufactured was a chemical essential to the manufacture of plastics, but one that is by its own nature extremely unstable. In my chemical plant in West Virginia, we also manufactured a similar material. These materials are known as polymerization initiators, and they make it possible for chemicals like ethylene (two carbons joined by a double bond) to react with each other and form the long chains we know as plastics (polyethylene). The materials we produced in West Virginia also had to be kept refrigerated or they would grow unstable and catch fire. Part of the lore of the plant involved the time when the manufacturing line for this material had a problem, and the temperature rose to the point where the chemical decomposed and ignited. That fire was remembered long after everyone who worked during it had left the plant. What made the situation in Texas worse was that the organic peroxides they made are not only flammable but explosive when they decompose.

Part of the manufacturing process for chemical plants involves process hazards reviews. In these reviews, the participants go through a systematic review of the inherent hazards of the process and facilities, and determine whether there are adequate safeguards to prevent incidents and injuries. Sometimes a significant hazard is discovered, one that had not been previously considered, and then the management of the plant faces the task of getting the fix done to remove the hazard. Since it takes time to implement new facilities (and to get the authorization to spend the money to build them), administrative controls are normally put in place to temporarily mitigate the risks. But even though I participated in many process hazards reviews in my career, I do not remember ever considering the case of having my plant submerged in multiple feet of floodwater, with no way to get anything working for days at a time. I imagine that the chemical and refining industries will have to go through substantial work to come up with new safeguards to prevent releases and explosions such as those now being seen in Texas.

Nuclear Energy Doesn’t Have To Be Scary

Nuclear power plant

 

Quick – can someone tell me what potential source of energy could single-handedly provide all of the energy requirements of the US for the next 1000 years? And at the same time, foster independence in rare earth materials that are mainly sourced from China? And would not generate carbon dioxide as it is consumed?

No, it’s not coal. Coal can provide a significant portion of energy needs, and coal ash is a prospective source of rare earth metals that may be harvested, but it creates a huge amount of CO2 and has other detrimental effects, like the mountaintop removal that is a blight in my home state of West Virginia. (Point of personal perspective: over 40 years ago, I had a part-time job as a chemist for a concrete company. There was a new coal-fired power plant coming on line in Nebraska, and the concrete company was considering using coal ash as an extender for cement in making concrete. I performed wet chemical analysis of fly ash, going through most of the metals by reacting with reagents, then precipitating out various compounds and evaporating them to dryness in platinum crucibles. The reaction stream went all the way to sodium, which had to be precipitated with a uranium salt. I had fun doing that work, but the one thing I remember is that if you hit a fly with a stream of the acetone used to dry dishware, the fly would drop out of the air instantly, leaving behind only a dehydrated husk.)

Give up on the original question? It’s thorium, the radioactive material with a 14-billion-year half-life. Thorium, along with uranium, was examined by the US government when nuclear power and nuclear weaponry were uppermost in its mind. But one factor that weighed against thorium turns out now to be very beneficial. See, it is nigh unto impossible to obtain any nuclear-weapon-grade material from the thorium fission process. Uranium reactors create plutonium as one of the natural byproducts of fission. If U238 (the most common uranium isotope in reactor fuel) absorbs a neutron, it becomes the extremely unstable isotope U239, which decays through neptunium into plutonium. Since U238 is the primary isotope in a pressurized water reactor, the spent fuel rods from a reactor will always contain plutonium. That is one reason fuel rod security is required: plutonium can be chemically separated from the toxic stew of radioactive material found in a fuel rod.

Thorium, though, does not create plutonium. It does create the uranium isotope U233, which is the active fuel for a thorium reactor. It also creates small quantities of another uranium isotope, U232, which acts as a poison against building a fission bomb from the U233. So the proliferation concern of segregating out fissile material from thorium fuel largely disappears.

(Paragraph of translation; if you understand the concept of isotopes, please skip over this paragraph. The last paragraph used a bit of physics jargon that is necessary to understand this post. I mentioned uranium 232 (U232) and uranium 233 (U233). Both refer to the element uranium, which has 92 protons. Where they differ is in the number of neutrons: uranium 233 has one more neutron than uranium 232. An element may have many different numbers of neutrons, and that is especially true of the heaviest (by atomic number) elements. These different isotopes have remarkably different characteristics, especially when dealing with nuclear reactions. This is important in the discussion below.)
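For readers who think in code, the isotope bookkeeping above boils down to simple arithmetic on proton and neutron counts. A toy sketch (the proton and neutron counts are standard nuclear facts; everything else is just illustration):

```python
# An isotope is identified by its element (proton count Z) and its
# mass number A = protons + neutrons. Same Z, different N = different isotope.

def describe(name: str, protons: int, neutrons: int) -> str:
    mass_number = protons + neutrons
    return f"{name}-{mass_number}: {protons} protons, {neutrons} neutrons"

# Uranium always has 92 protons; these isotopes differ only in neutron count.
print(describe("U", 92, 140))   # U-232
print(describe("U", 92, 141))   # U-233, one more neutron than U-232
print(describe("U", 92, 146))   # U-238, the most common natural isotope
print(describe("Th", 90, 142))  # Th-232, the long-lived thorium isotope
```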

In order to appreciate the difference between a thorium reactor and the current pressurized water reactors (PWRs) using uranium, it is necessary to discuss PWRs. The current nuclear power industry takes fuel rods and inserts them into the core of the reactor. A reactor contains control rods of neutron-absorbing materials, which are raised and lowered to moderate the nuclear fission occurring in the fuel rods. If the control rods were raised totally out of the core, the fuel rods could not contain the runaway nuclear reaction that would ensue. The zirconium cladding of the fuel rods would melt, and the contents of the fuel rods would pool at the bottom of the reactor. This event would not be good, to put it mildly.

When a PWR works properly, it heats water which is kept pressurized in the primary coolant loop. That water circulates through a steam generator, creating the steam that runs the electrical generators. Fuel rods in a PWR have a limited lifespan, and once they are no longer useful for generating electricity, they must be pulled out and stored in water to handle the immediate heat from nuclear reactions still continuing inside them. In some cases, the rods are held in water pools for 10-20 years. Then the rods must be kept secure, and eventually stored somewhere where they will stay segregated from the environment for geologically significant periods of time (hundreds of thousands of years). Only after that much time has passed do the spent fuel rods no longer pose a threat to health.

So, with uranium as the source fuel, you can generate enormous amounts of energy without CO2 generation, but with huge potential issues. You have extremely complex systems operating at incredible pressures and temperatures that must keep operating in order to prevent a runaway reaction. Then, if all works right, you have to take the fuel rods out after only a few percent of the potential energy is released, since the fission byproducts poison the reaction long before all of the uranium has fissioned. And then you must isolate the fuel rods for hundreds of thousands of years, or else risk radioactive contamination of the environment. No wonder nuclear power is viewed with disfavor.

Thorium would be significantly different, though. Thorium reactors can use a molten salt as the liquid that would carry the thorium, the U233, and all fission products coming from the nuclear reactions. This means that the operating pressures of the system are far lower than in a PWR, reducing the potential for leakage or cracking of the containment system. And if the liquid salt does leak out, what would happen? It would freeze in place. Indeed, the possibility of a reactor core meltdown disappears with a thorium-fueled reactor.

It is essential to separate out the fission products and radioactive isotopes generated in this type of reactor. This can be done by taking a small side stream of the circulating salt solution and using standard chemical separation techniques to return the unburned U233 to the salt solution; other radioactive isotopes are removed in the process. Because the fuel can be recycled indefinitely until it is burned, what remains are the daughter fission products (the lighter elements left over from a uranium nucleus that has fissioned), which have much shorter half-lives. Instead of the hundreds of thousands of years needed for spent fuel rods to become harmless, it will take only a couple of hundred years for the fission byproducts from thorium to decay to a harmless state.
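That difference in storage timescales falls straight out of the exponential decay law: the fraction of a radioactive material remaining after time t is 0.5 raised to the power (t divided by the half-life). A small illustrative sketch, using round-number half-lives (roughly Cs-137 for a short-lived fission product and Pu-239 for a long-lived actinide found in spent fuel rods):

```python
# Exponential radioactive decay: the fraction remaining after time t is
# 0.5 ** (t / half_life). Short-half-life fission products fade quickly;
# the long-lived actinides in spent uranium fuel rods do not.

def fraction_remaining(half_life_years: float, elapsed_years: float) -> float:
    return 0.5 ** (elapsed_years / half_life_years)

# A 30-year fission product (roughly the half-life of Cs-137) after 300 years:
print(f"{fraction_remaining(30, 300):.6f}")      # 0.5**10 -> about 0.1% left

# A 24,000-year actinide (roughly Pu-239) after the same 300 years:
print(f"{fraction_remaining(24_000, 300):.4f}")  # still about 99% left
```

Ten half-lives takes a short-lived isotope down to a thousandth of its starting amount, while the long-lived actinides have barely begun to decay; removing the actinides from the waste stream is what shortens the storage problem.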

One thing thorium has going for it: it is four times more plentiful in the earth’s crust than uranium. And its primary ore also contains rare earth metals and phosphate, so a commercial mining operation aimed at recovering thorium will also produce rare earth metals, and phosphate for fertilizer. All of these materials are essential for our modern economy.

Granted, any process involving radioactive materials has risks, and even though a thorium-fueled liquid salt reactor is simpler than a PWR, there are many challenges to commercializing this technology. But if we as a society are concerned with developing energy sources that do not produce CO2 and can serve as baseload power generators, thorium reactors are certainly a technology that should be actively researched. To think, we may have had the answer to our energy dilemmas in hand over 60 years ago, only to throw it away because thorium doesn’t lend itself to making good bombs (an oxymoron of the first degree). Let’s try to rectify humanity’s mistake and work to investigate and commercialize this amazing resource we have been given from our earth.

Pay Me Now, or Pay Me Later! Guess What? It’s Now Later!

Computer desk IBM 360 Desk Console

Want to cut down on the size and ineffectiveness of the Federal government? If so, then you will need to shell out significant dollars to replace the decades-old IT systems that the government uses for many of its programs. And you will need to rework many of the procurement practices and political machinations that have hamstrung efforts to update IT systems in the past.

It is no secret that the IRS is at the rear of the pack in updating its IT systems. Two of the main IRS systems are IT antiques dating back over 50 years, running on IBM mainframes with programming written in assembly language. There have been requests to modernize the systems involved, but since the IRS is viewed as anathema by the Republicans dominating Congress, the trend over the past decade has been to cut IRS spending, not upgrade the systems. I actually remember IBM mainframes; the IBM 360 was the workhorse of the university computing system at our school. The fact that essential government functions still run on similar systems should bring shame to anyone who cares about efficient government services. Indeed, it appears that up to $60 billion per year across the Federal government is spent nursemaiding these antiquated systems through yet another day.

Not only does the government incur substantial costs keeping these antiques running, it cannot achieve the efficiencies in service delivery that are possible with modern computer systems. I worked for over 20 years installing and upgrading my company’s business enterprise software. Our system was SAP, and in the early 1990’s I began work at a chemical plant implementing the mainframe version of this system. Beginning in 1999, I worked full time on SAP implementation for our department, and I understand the complexity involved in uprooting existing systems and implementing brand new business processes. The period immediately before and after go-live was always traumatic and stressful. But it is only after going through these efforts that it is possible to reap the benefits of improved IT. The increase in direct IT support costs is greatly outweighed by the reductions in support staff at the plants and in central offices.

Not only are overall costs lowered, but the information that comes from such a system is up to date and accurate. When I began working at a plant, it took a clerk in each process multiple days to assemble the information needed for monthly cost reporting. These reports were circulated in preliminary form among the management of the process, and eventually they were issued. Then the plant accountant would assemble all of the overhead cost sheets, and the allocated costs would be figured. All of this meant that cost information was never current, always subject to significant revisions, and provided only a once-a-month snapshot.

By the time I retired in 2015, cost data was available instantaneously for all products, including labor costing and allocated overheads. The manpower was greatly reduced at a site, the information was better, and managers could focus on factors within their control instead of trying to manipulate the reports to put their operations in a better light.

The Federal government cannot achieve the efficiencies that private industry has achieved, because the impetus to upgrade IT systems has not been sufficient for the departments to get the funds to implement the upgrades. In fact, lately this effort has gone in the opposite direction. According to the Government Accountability Office (GAO), Operations & Maintenance spending on IT systems has been rising year by year since 2012, while spending for modernization and development has declined. From fiscal years 2010 to 2017, modernization and development spending decreased by $7.3 billion.

Even when funds are appropriated for upgrades, current procurement practices preclude efficient implementation. I am aware of an effort to implement a portion of business enterprise software for the Army. Supposedly the contract for this project was approved in late 2016. However, due to the nature of government procurement, a competitor who was unsuccessful in the bidding appealed the awarding of the contract. It has been six months, and there has not been any update on the resolution of the situation. Meanwhile, the employees who would have been assigned to the project sit at the government contractor awaiting actual productive work. Such delays lead to projects running behind schedule and far over budget.

One reason the funding for modernizing IT systems has decreased is the sequester process for budgeting. With funding for discretionary spending flattened by decree, it has been increasingly difficult to gain support for IT improvements. But for fiscal conservatives, it should be a primary goal to ensure that if the government must spend tax dollars, it does so in a cost-effective manner, and in a way that allows the overall government workforce to be reduced. Unfortunately, this approach has not reached the top 10 list of the Grover Norquist acolytes who view any increase in expenditure by a government agency as sacrilege.

Since the current administration is full of folks with business experience, maybe these types of modernization efforts may finally gain traction. This is one area where I do find agreement with the priorities of the Trump administration. This past week’s gathering of tech business executives with the administration did discuss IT modernization. My fear is that in this administration’s purge of discretionary spending, once more we will fall further behind the IT curve. Future archeologists will excavate data centers complete with mainframes and tape drives intact, and will marvel that these relics maintained their usefulness long after they had been abandoned by the world of business.