The concept of turbine-powered automobiles is not a new or unexplored idea. In fact, several prominent automakers explored it as early as the 1950s and 60s with real, functional prototypes. Notably, Rover-BRM in the UK, along with Chrysler and General Motors in the US, ran turbine engine programs to test the viability of such engines in the commercial market. Chrysler's turbine research began in the late 1930s and eventually led to a public user program from September 1964 to January 1966, for which a total of 55 cars were built. General Motors tested gas turbine-powered cars with its many iterations of the Firebird in the 50s and 60s. Rover and British Racing Motors developed several prototypes of their Rover-BRM concept, which competed at Le Mans three years in a row, from 1963 through 1965. However, even Chrysler, considered the leader in automotive gas turbine research, ultimately abandoned its program in 1979 after seven generations of the turbine engine. Many of the initial issues with heat control and acceleration lag were resolved over the program's lifetime, but the program never paid off in the retail automotive sector, and its continued development was deemed too risky for Chrysler at the time.
Several decades later, we are seeing a resurgence of turbine motors in automobiles, now serving as range-extender generators for electric vehicles. As with many emerging technologies, learning from past research and failed historical attempts can point toward elegant, innovative solutions to today's challenges. This revival of an old concept shares many of the qualities that made turbine engines attractive during their initial development: the ability to run on virtually any flammable liquid, and a high power density that yields significantly lower weight and size than a comparable piston engine.
Steam turbine technology has advanced significantly since it was first developed by Sir Charles Parsons in 1884. The impulse steam turbine was first demonstrated by Karl Gustaf Patrik de Laval in 1887, and a pressure-compounded steam turbine based on the de Laval principle was developed by Auguste Rateau in 1896. Westinghouse, one of the earliest licensees of Parsons' steam turbine technology, became one of the first Original Equipment Manufacturers (OEMs) in power generation and transmission.
Over the years, as steam turbine technology advanced, designs followed either the impulse or the reaction principle, with the reaction type being more efficient. Though not as efficient, the impulse type gained popularity due to its lower cost and compact size. With modern design and optimization methods, the efficiency gap between the two types has narrowed to roughly 2-5%, depending on size and application.
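The distinction between impulse and reaction stages is usually expressed through the degree of reaction: the fraction of the stage enthalpy drop that occurs in the rotor. A minimal sketch, with purely illustrative numbers:

```python
def degree_of_reaction(dh_rotor, dh_stage):
    """Fraction of the stage enthalpy drop that occurs in the rotor row."""
    return dh_rotor / dh_stage

# Pure impulse stage: the whole drop occurs in the stationary nozzle row
r_impulse = degree_of_reaction(0.0, 100.0)    # 0.0
# 50% reaction stage: the drop is split evenly between stator and rotor
r_reaction = degree_of_reaction(50.0, 100.0)  # 0.5
```

A degree of reaction of zero corresponds to the classic impulse design; values near 0.5 are typical of reaction stages.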
Global warming and the growing demand for energy are two of the most pressing problems facing the power generation industry, and solutions have been researched for years. Supercritical CO2 (sCO2) is frequently studied as an alternative working fluid for power generation cycles because the resulting sCO2 Brayton cycle offers compact turbomachinery, high efficiency, and a small environmental footprint. The use of sCO2 in nuclear reactors has been studied since the early 2000s in the development of Generation IV reactors, though the idea itself can be traced back to the 1940s. At that time, however, little attention was paid to supercritical CO2, since steam was considered efficient enough and was the far better understood technology. Today, the demand for more efficient power generation continues to rise, and with it the interest in sCO2.
The potential of supercritical CO2 spans power generation applications from nuclear to geothermal and even fossil fuel plants. The cycle envisioned is a non-condensing, closed-loop Brayton cycle with external heat addition and rejection used to indirectly heat and cool the carbon dioxide working fluid.
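To make the closed Brayton cycle concrete, here is a minimal cold-air-standard sketch of its ideal thermal efficiency as a function of pressure ratio. Note the caveat in the comments: treating CO2 as an ideal gas is a rough assumption, since real sCO2 cycles operate near the critical point, which is precisely where ideal-gas relations break down.

```python
def ideal_brayton_efficiency(pressure_ratio, gamma):
    """Cold-air-standard thermal efficiency of an ideal closed Brayton cycle."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# gamma ~ 1.29 treats CO2 as an ideal gas -- an illustrative simplification only;
# real sCO2 property behavior near the critical point requires real-gas models
eta = ideal_brayton_efficiency(3.0, 1.29)  # roughly 0.22 for this toy case
```

Real sCO2 cycle studies replace this one-liner with real-gas property lookups and recuperation, but the pressure-ratio dependence shown here is the starting point.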
Designers continue to push turbine inlet temperature to extremes in pursuit of higher power and efficiency. Material scientists have contributed greatly, developing ever more durable high-temperature materials such as special steels, titanium alloys, and superalloys. However, with turbine inlet temperatures reaching as high as 1700°C, cooling must be integrated into the system to prolong blade life, secure operation, and achieve economic viability.
A high-pressure turbine can consume up to 30% of the compressor air for cooling, purge, and leakage flows, which is a substantial efficiency penalty. Cooling is worthwhile only if the gain from a higher turbine inlet temperature outweighs this loss. The same trade-off applies to both aviation engines and land-based gas turbines.
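This trade-off can be sketched with a crude bookkeeping model: charge the cooling air as doing no turbine work at all (a pessimistic assumption) and compare an uncooled baseline against a hotter, cooled design. All numbers below are illustrative, not taken from any specific engine.

```python
CP_GAS = 1150.0  # J/(kg*K), hot-gas specific heat (illustrative value)

def turbine_specific_work(tit_K, exit_T_K, cooling_fraction):
    """Turbine work per kg of compressor-delivery air, under the pessimistic
    assumption that the cooling air does no turbine work at all."""
    return (1.0 - cooling_fraction) * CP_GAS * (tit_K - exit_T_K)

# Uncooled baseline vs. a hotter design that spends 25% of the air on cooling
w_base   = turbine_specific_work(1300.0, 900.0, 0.00)  # 460 kJ/kg
w_cooled = turbine_specific_work(1700.0, 900.0, 0.25)  # 690 kJ/kg
```

Even with a quarter of the air diverted to cooling, the 400 K rise in turbine inlet temperature leaves the cooled design ahead in this toy example, which is exactly why designers accept the cooling-air penalty.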
The history of turbine cooling goes back 50 years, and the technology has evolved to fit many different environments; the diversity of turbine cooling we see today is just the tip of the iceberg. As technology advances, designers achieve higher cooling effectiveness with less coolant. Different constructions suit different goals and needs, but the detailed cooling design must be balanced with the whole system and make the most of advances in neighboring disciplines. For example, if the flow path is optimized, the mechanical design is modified, or a new material is adopted, the cooling design must change accordingly. It is also worth noting that the manufacturing of hot-section components and turbine cooling design are interdependent, each outpacing and leading the other to new levels. The merging of disciplines and additive manufacturing will bring even more flexibility to turbine cooling design in the future.
Service companies are often faced with redesigning pumps that have failed in the field, on extremely tight turnaround times. While quick-fix methods can return these pumps to operation, more complex problems may require taking a step back and analyzing how the pump could be redesigned based on its current operation. Such engineering upgrades can resolve recurring failure modes of a given machine, and they can also meet new capacity demands imposed by a customer's upstream or downstream system changes. While efficiency gains benefit the overall system, it is often more important to meet capacity requirements and extend pump life by decreasing the Net Positive Suction Head required (NPSHr).
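The NPSHr requirement only has meaning against the NPSH available at the suction; the redesign goal is to keep NPSHa comfortably above NPSHr. A simplified sketch of that margin check, with illustrative numbers (open tank, velocity head neglected, hypothetical NPSHr of 4 m):

```python
G = 9.81  # m/s^2

def npsh_available(p_surface_Pa, p_vapor_Pa, rho, static_head_m, friction_loss_m):
    """Simplified NPSH available at the pump suction: surface pressure head
    minus vapor pressure head, plus static elevation, minus suction losses."""
    return (p_surface_Pa - p_vapor_Pa) / (rho * G) + static_head_m - friction_loss_m

# Water at 20 C drawn from an open tank 2 m above the pump, 0.5 m suction losses
npsha = npsh_available(101325.0, 2339.0, 998.0, 2.0, 0.5)  # ~11.6 m
margin = npsha - 4.0  # against a hypothetical NPSHr of 4 m
```

Lowering NPSHr through an impeller redesign widens this margin for a fixed installation, which is why it extends pump life against cavitation.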
In this blog post, we will investigate how to move an existing centrifugal pump through the AxSTREAM platform to solve the engineering challenges common to OEM pump upgrades. With AxSTREAM's integrated platform and reverse engineering module, many of the CAE tasks typical of such an analysis can be completed in record time. The first step of the reverse engineering process is obtaining the necessary geometric information for the pump in question. Using AxSLICE, the user can take an STL, IGES, or CURVE file, or a generated point cloud, and transform this 3D profile into workable geometry inside the AxSTREAM platform. In a matter of minutes, the user can outline the hub and shroud and turn a blank 3D profile into a profile defined by a series of segments. As seen in Figure 1, the centrifugal pump is now defined by a hub, shroud, and intermediate section.
For a refrigeration or HVAC system to function optimally, its most vital component, the compressor, must do its job: raising the temperature and pressure of the low-pressure superheated gas to move the fluid into the condenser. Consequently, refrigeration compressors must be properly maintained through regular maintenance, testing, and inspection. There are several conditions that indicate compressor problems or failures, and with the right supervision it is possible to avoid further damage. In this post, we will identify and discuss some of these conditions.
The helicopter is a sophisticated, versatile, and reliable aircraft of extraordinary capability. Its contribution to civil and military operations, owing to its versatility, is significant and motivates continued research into enhancing its performance. The complexity of helicopter operations does not allow priority to be given to any single component; nevertheless, the main engine is key to a successful flight. In case of engine failure, the helicopter can still land safely by entering autorotation, but only under particular flight conditions. This article will focus on the possible threats that can cause engine failure or degrade engine performance.
When a helicopter operates in a desert or over coastlines, dust and sand can challenge engine performance by eroding the rotating components, especially the compressor blades. Moreover, the cooling passages of the turbine blades can become blocked, and dust can accumulate on the inner shaft, causing imbalance and unwanted vibration. The most common threat of this kind is brownout, caused by the rotorwash kicking up a cloud of dust as the helicopter lands.
Historically, turbomachinery development began with empirical rules postulated by early pioneers. With the need for jet engines for aircraft propulsion, dimensionless analysis became popular, followed by 1D mean-line design and 2D meridional methods. Today, 2D meridional methods combined with 3D blade-to-blade CFD/FEA are a necessity as efficiency and reliability requirements are pushed ever further.
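The 1D mean-line approach mentioned above is built around the Euler turbomachinery equation, which relates work transfer to the change in angular momentum across a blade row. A minimal sketch, with illustrative velocity-triangle numbers:

```python
def euler_specific_work(U1, Cu1, U2, Cu2):
    """Euler turbomachinery equation (compressor sign convention):
    specific work = U2*Cu2 - U1*Cu1, where U is the blade speed and Cu is
    the tangential component of the absolute velocity."""
    return U2 * Cu2 - U1 * Cu1

# Illustrative centrifugal impeller: no inlet swirl, 350 m/s tip speed,
# exit swirl reduced to 0.9*U2 by slip
w = euler_specific_work(150.0, 0.0, 350.0, 0.9 * 350.0)  # ~110 kJ/kg
```

Everything in a mean-line code ultimately feeds this relation: losses, deviation, and slip models all adjust the velocity triangles that go into it.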
One key aspect of 2D meridional design is S1-S2 optimization: the task of reviewing, adjusting, and optimizing the flow path in the tangential (S1, blade-to-blade, or pitchwise) and meridional (S2, or spanwise) planes. Done by hand, it is time-consuming and laborious, and hence prone to human error. Its main purposes are to:
– Fit the flow path to specific meridional dimensional constraints
– Adjust blade-to-blade parameters while taking structural constraints into account.
Step 1: Initial data input
– Input a set of boundary conditions, geometrical parameters, and constraints known to the user.
Step 2: Design space generation
– Thousands of machine flow path designs can be generated from scratch
– Explore a set of design solution points using the Design Space Explorer
– Adjust geometric parameters while retaining the desired boundary conditions
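The design-space generation step above can be sketched as simple bounded sampling. This is a generic illustration only; the parameter names and bounds below are hypothetical and are not AxSTREAM's actual inputs.

```python
import random

# Hypothetical flow-path parameters and bounds (illustrative only)
BOUNDS = {
    "hub_diameter_m": (0.10, 0.30),
    "blade_count":    (11, 29),
    "stagger_deg":    (20.0, 55.0),
}

def generate_design_space(n, seed=0):
    """Randomly sample n candidate flow-path designs within the stated bounds."""
    rng = random.Random(seed)
    designs = []
    for _ in range(n):
        d = {name: rng.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}
        d["blade_count"] = round(d["blade_count"])  # integer-valued parameter
        designs.append(d)
    return designs

candidates = generate_design_space(1000)  # thousands of designs from scratch
```

In practice each candidate would then be evaluated against the boundary conditions and constraints, and the feasible points explored interactively, which is the role the Design Space Explorer plays in the workflow described above.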
Optimization (or parametric study) of a twin-spool bypass turbofan engine with mixed exhaust and a cooled turbine can be considered one of the most complex problem formulations. For engine selection, thrust specific fuel consumption and specific thrust must be determined against variables such as design limitations (inlet temperature, etc.), design choices (fan pressure ratio, etc.), and operating conditions (speed and altitude). The task involves cycle-level studies followed by machine-, module-, stage-, and component-level optimization. This calls for an integrated environment (IE), and it is desirable to have such an IE operating on a single platform.
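To see where specific thrust and thrust specific fuel consumption come from, here is a deliberately minimal, fully expanded single-stream sketch, a far simpler stand-in for the twin-spool mixed-exhaust turbofan described above, with illustrative cruise numbers:

```python
def specific_thrust_and_tsfc(v0, v_exit, fuel_air_ratio):
    """Fully expanded single-stream engine: specific thrust per unit air mass
    flow, and the resulting thrust specific fuel consumption."""
    f_spec = (1.0 + fuel_air_ratio) * v_exit - v0  # N per (kg/s) of air
    tsfc = fuel_air_ratio / f_spec                 # kg of fuel per (N*s)
    return f_spec, tsfc

# Illustrative cruise point: 250 m/s flight speed, 600 m/s exhaust, f = 0.02
f_spec, tsfc = specific_thrust_and_tsfc(250.0, 600.0, 0.02)
```

A real twin-spool mixed-exhaust formulation adds bypass ratio, fan and core pressure ratios, mixing, and turbine cooling bleed to this bookkeeping, which is exactly why cycle-level parametric studies need an integrated environment.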
Historically, such integrated environments were developed for the design of axial turbines (mainly steam). Later they were expanded to gas turbines (especially blade cooling calculations) and to axial compressors via plug-in modules. The new challenge designers face today is developing mixed-flow machinery. An effective system for modern turbomachinery design needs to do the following: