
Showing posts for 2022


18 May 2022

The Angstrom Era of Electronics


The angstrom is a unit of measurement most commonly used for extremely small particles and atoms in physics and chemistry.

However, the nanometre is becoming almost too large a unit for the newest electronic components, and in the not-so-distant future the angstrom may be used to describe the size of semiconductors instead.

It could happen soon

Some large chipmakers have already announced plans to move to angstrom-based process names within the next decade, which would be a huge step in terms of technological advancement.

The most advanced process nodes are already below 10nm, with a typical chip being around 14nm. Seeing as 1nm is equal to 10Å, the angstrom is the logical next step.
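If you want to play with the numbers yourself, here's a quick sketch of the conversion (in Python, purely for illustration):

```python
# A minimal sketch of the unit relationships discussed above:
# 1 nm = 10 angstroms, and 1 angstrom = 1e-10 m.
ANGSTROMS_PER_NM = 10

def nm_to_angstrom(nm: float) -> float:
    """Convert a length in nanometres to angstroms."""
    return nm * ANGSTROMS_PER_NM

print(nm_to_angstrom(14))   # a 14 nm feature is 140 angstroms
print(nm_to_angstrom(0.5))  # sub-nanometre sizes land in single-digit angstroms (5.0)
```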

The size of an atom

The unit (Å), equal to 0.1nm, is used to measure atomic and ionic radii. 1Å is roughly the size of a single atom: chlorine, sulfur and phosphorus all have covalent radii of about 1Å, and a hydrogen atom's radius is approximately 0.5Å.

As such, angstrom is mostly used in solid-state physics, chemistry and crystallography.

The origin of the Angstrom

The name of the unit came courtesy of Anders Jonas Ångström, who used the measurement in 1868 to chart the wavelengths of electromagnetic radiation in sunlight.

Using this new unit meant that the wavelengths of light could be measured without decimals or fractions, and after its creation the chart was used by researchers in solar physics and atomic spectroscopy.

Will silicon survive?

It’s been quite a while since Moore’s Law held true. The observation was that every two years the number of transistors in an integrated circuit (IC) would double, while manufacturing and consumer costs would fall. Although the principle held up well after it was proposed in 1965, it did not anticipate the physical limits of how far electronic components can shrink.

Silicon, the material used for most semiconductors, has an atomic diameter of approximately 0.2nm (2Å), so a 14nm transistor feature is only around 70 silicon atoms wide. Even as some firms promise to increase the capabilities of silicon semiconductors, you have to wonder if the material will soon need a successor.

Graphene, silicon carbide and gallium nitride have all been thrown into the ring as potential replacements for silicon, but none are developed enough at this stage for production to be widespread. That said, all three of these and several others have received research and development funding in recent years.

How it all measures up

The conversion of nanometres to angstrom may not seem noteworthy in itself, but the change and advancement it signals is phenomenal. It’s exciting to think about what kind of technology could be developed with electronics this size. So, let’s size up the angstrom era and see what the future holds.

Tags: angstrom nanometres semiconductors atoms moore’s law silicon


11 May 2022

What are GaN and SiC?


Silicon will eventually go out of fashion, and companies are currently investing heavily in finding its successor. Gallium Nitride (GaN) and Silicon Carbide (SiC) are two semiconductors marked out as possible replacements.

Compound semiconductors

Both materials contain more than one element, so they are known as compound semiconductors. They are also both wide-bandgap semiconductors, which makes them more robust and capable of higher performance than their predecessor, silicon (Si).

Could they replace Silicon?

SiC and GaN both have properties that are superior to Si, particularly their ability to handle higher voltages.

GaN has a bandgap of around 3.4eV and SiC around 3.3eV, compared to only 1.1eV for Si. This wide bandgap gives the two compounds an advantage at high voltages, although it is less of a benefit in low-voltage applications.

Both GaN and SiC also have a far greater breakdown field strength than the current semiconductor staple, roughly ten times higher than Si. Electron mobility, however, differs significantly between the two materials, and between them and silicon.
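For a rough side-by-side view, here's a small sketch laying out the figures quoted above (approximate values only; real datasheet numbers vary with material grade and SiC polytype):

```python
# A rough side-by-side of the properties discussed above. Bandgap figures are the
# approximate values quoted in this post; "breakdown_vs_si" is the relative
# breakdown field strength (~10x silicon for both wide-bandgap materials).
materials = {
    "Si":  {"bandgap_eV": 1.1, "breakdown_vs_si": 1},
    "SiC": {"bandgap_eV": 3.3, "breakdown_vs_si": 10},
    "GaN": {"bandgap_eV": 3.4, "breakdown_vs_si": 10},
}

for name, props in materials.items():
    print(f"{name}: bandgap {props['bandgap_eV']} eV, "
          f"breakdown field ~{props['breakdown_vs_si']}x silicon")
```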

Main advantages of GaN

GaN can be grown by depositing a gaseous raw material onto a substrate, and one such substrate is silicon. Because the technology for processing Si already exists, this avoids the need for entirely new specialist manufacturing equipment.

The electron mobility of GaN is higher than that of both SiC and Si, and GaN-on-Si devices can be manufactured relatively cheaply, so it produces transistors and integrated circuits with faster switching speeds and lower resistance.

There is always a downside, though, and GaN’s is its lower thermal conductivity. GaN only reaches around 60% of SiC’s thermal conductivity which, although still good, could end up being a problem for designers.

Is SiC better?

As we’ve just mentioned, SiC has a higher thermal conductivity than its counterpart, which means it can cope with higher operating temperatures than GaN.

SiC also has more versatility than GaN in what type of semiconductor it can become. The doping of SiC can be performed with phosphorous or nitrogen for an N-type semiconductor, or aluminium for a P-type semiconductor.

SiC is considered to be further ahead in terms of material quality, and SiC wafers have been produced in larger sizes than GaN ones. SiC-on-SiC wafers also beat GaN-on-SiC wafers on cost.

SiC is mainly used for Schottky diodes and FET or MOSFET transistors to make converters, inverters, power supplies, battery chargers and motor control systems.

Tags: silicon gallium nitride silicon carbide semiconductors compound semiconductors sic gan raw material wafers schottky diodes mosfet transistors converters inverters power supplies battery chargers motor control systems


04 May 2022

Semiconductors in space


A post about semiconductors being used in space travel would be the perfect place to make dozens of space-themed puns, but let’s stay down to earth on this one.

There are around 2,000 chips used in the manufacture of a single electric vehicle. Imagine, then, how many chips might be used in the International Space Station or a rocket.

Despite the recent decline in the space semiconductor market, a significant increase in revenue looks likely over the next few years.

What effect did the pandemic have?

The industry was not exempt from the shortages and supply chain issues caused by COVID-19. Sales decreased and demand fell by 14.5% in 2020, in contrast to the year-on-year growth of previous years.

Due to the shortages, many companies within the industry delayed launches and there was markedly less investment and progress in research and development. However, two years on, the scheduled dates for those postponed launches are fast approaching.

Investment and profit are consequently expected to recover and grow over the next five years. The market is estimated to jump from $2.10 billion in 2021 all the way up to $3.34 billion in 2028, a compound annual growth rate (CAGR) of around 6.89%.
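If you want to sanity-check that growth figure, the standard CAGR formula is (end value ÷ start value)^(1/years) − 1. A quick sketch using the rounded market figures above:

```python
# Sanity check of the quoted growth rate using the standard CAGR formula:
# CAGR = (end_value / start_value) ** (1 / years) - 1
start_value = 2.10   # market size in 2021, $ billions
end_value = 3.34     # forecast market size in 2028, $ billions
years = 2028 - 2021  # 7-year span

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"CAGR ≈ {cagr:.2%}")  # ≈ 6.85% with these rounded inputs, close to the quoted 6.89%
```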

What is being tested for the future

In the hope of improving spacecraft circuitry, several newer technologies are currently being tested for use in space travel.

Some components are already being tested on board spacecraft, both to trial them in real conditions and to take advantage of the vacuum of outer space. The low-pressure environment can act like a clean room, with less risk of particles contaminating components during manufacture.

Graphene is one of the materials being considered for future space semiconductors. The one-atom-thick material is being tested by a team of students and companies to see how it reacts to the conditions of space. The experiments are being run with a view to using the material to improve the accuracy of sensors in the future.

Two teams from the National Aeronautics and Space Administration (NASA) are currently looking at the use of Gallium Nitride (GaN) in space too. This and other wide-bandgap semiconductors show promise because of their performance at high temperatures and high levels of radiation. They also have the potential to be smaller and lighter than their silicon predecessors.

GaN on Silicon Carbide (GaN on SiC) is also being researched for amplifiers that allow satellites to transmit at high radio frequencies. Funnily enough, it’s actually easier to make this material in space, since the ‘clean room’ vacuum effect makes the process of epitaxy – growing a crystalline layer on top of a substrate – much more straightforward.

To infinity and beyond!

With the global market looking up for the next five years, there is a good chance of progress in the development of space-specialised electronic components. With so many possible advancements in the industry, it probably won’t be long before we see more pioneering tech in space.

To bring us back down to Earth, if you’re looking for electronic components contact Cyclops today to see what they can do for you. Email us at sales@cyclops-electronics.com or use the rapid enquiry form on our website.

Tags: semiconductors space travel chips graphene national aeronautics space administration nasa gallium nitride satellites


27 April 2022

What alternatives to WiFi are available?


WiFi has been an integral part of our lives since the 90s, when it first came into being. Originally created for wireless connections in cashier systems under the name WaveLAN, the technology was given the trademarked WiFi name just before the turn of the century and hasn’t looked back since.

Alongside WiFi, cellular internet was also thriving, giving people the power to connect to a network through a phone signal. The current rollout of 5G shows that this method of connecting to the internet is also still very popular and getting more advanced by the year.

But since the conception of these two types of communications, several new methods have also been designed, and may be contenders to replace them in future.

How does WiFi work?

WiFi stands for Wireless Fidelity and uses radio waves to transmit signals between devices. The frequencies are in the gigahertz range – the 2.4GHz and 5GHz bands – as opposed to the kilohertz and megahertz frequencies used by AM and FM radio respectively. (The ‘G’ in cellular generations like 4G and 5G, by contrast, stands for ‘generation’ rather than gigahertz.)
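To put those frequencies into perspective, wavelength is just the speed of light divided by frequency. A quick illustrative sketch (the AM and FM example frequencies are typical values, not exact ones):

```python
# Radio wavelength follows wavelength = c / frequency. A quick comparison of the
# bands mentioned above; the AM and FM frequencies are typical example values.
C = 3.0e8  # speed of light in m/s (approximate)

bands = {
    "AM radio (~1,000 kHz)": 1.0e6,
    "FM radio (~100 MHz)": 100.0e6,
    "WiFi 2.4 GHz band": 2.4e9,
    "WiFi 5 GHz band": 5.0e9,
}

for name, freq_hz in bands.items():
    wavelength_m = C / freq_hz
    print(f"{name}: wavelength ≈ {wavelength_m:.3g} m")
```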

But, as with all things, there are limitations to WiFi’s capabilities. Many devices can’t use the 5GHz band because they weren’t built to support it, and the 2.4GHz band is now so congested that it is often barely usable.

LiFi

This WiFi alternative, known as Light Fidelity, was first announced in 2011 during a TED Global talk by Professor Harald Haas, who demonstrated it live for the first time. The system uses light instead of radio waves, so lightbulbs can form the basis of a wireless network.

Despite the term being first coined by Haas, CSO of PureLiFi, several companies have since introduced products with strikingly similar names that also use light. This type of communication is called Optical Wireless Communications (OWC), which encompasses communications using infrared, ultraviolet and visible light.

Satellite WiFi

Starlink is just one example in the category of satellite WiFi. The SpaceX subsidiary uses a network of private satellites positioned around the globe to provide internet access, and currently has around 2,000 working satellites orbiting the planet.

Although this is already an established form of internet access, especially in rural areas, the investment going into the technology and its versatility make it a strong contender to dominate wireless connectivity in the future.

Mesh Networking

Mesh networks are often used as an extension to a regular home WiFi connection. The short-range network uses two modulation techniques, Binary and Quadrature Phase-Shift Keying (BPSK and QPSK), which allows the mesh devices to behave like high-speed ultra-wideband ones.

The system works on the principle that you install nodes, like mini satellites, throughout your house. The nodes all act as stepping stones, which means the WiFi signal at any point in your house will be much stronger than if you only had one central router.
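As a toy illustration of the ‘stepping stone’ idea, here's a sketch of a device finding its way back to the router through the nodes (the house layout is entirely made up):

```python
# Toy illustration of the "stepping stone" idea: a device reaches the router by
# hopping node-to-node. The topology below is made up purely for illustration.
from collections import deque

links = {
    "router": ["hall_node"],
    "hall_node": ["router", "kitchen_node", "landing_node"],
    "kitchen_node": ["hall_node"],
    "landing_node": ["hall_node", "bedroom_node"],
    "bedroom_node": ["landing_node", "laptop"],
    "laptop": ["bedroom_node"],
}

def path_to_router(device: str) -> list[str]:
    """Breadth-first search for the shortest hop path from a device to the router."""
    queue = deque([[device]])
    visited = {device}
    while queue:
        path = queue.popleft()
        if path[-1] == "router":
            return path
        for neighbour in links[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return []

print(path_to_router("laptop"))
# ['laptop', 'bedroom_node', 'landing_node', 'hall_node', 'router']
```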

The fibre-optic future

With the recent advent of 5G and the increasing availability of faster WiFi thanks to tech like fibre optic broadband, it’s unlikely WiFi will go out of fashion any time soon. But it’s always nice to have a bit of choice, isn’t it?

One huge benefit that comes with the internet is being able to find electronics component suppliers at high speed. Whether you’re on satellite WiFi, cellular, or LiFi, contact Cyclops Electronics today at sales@cyclops-electronics.com to see how we can help you.

Tags: wifi cellular internet wireless fidelity radio waves radio lifi satellite wifi mesh networking satellites the fibre-optic future


14 April 2022

How transistors replaced vacuum tubes


Electronics has come on in leaps and bounds over the last 100 years, and one of the most notable changes is the size of components. At the turn of the last century, mechanical components were gradually being switched out for electrical ones, and one example of this switch was the vacuum tube.

A lightbulb moment

Vacuum tubes were invented in the early 1900s, and the first ones were relatively simple devices containing only an anode and a cathode. The two electrodes were sealed inside a glass or metal tube from which the gas was removed to create a vacuum. This allowed electrons to pass between the two electrodes, so the device could act as a switch in a circuit.

Original vacuum tubes were quite large and resembled a lightbulb in appearance. They signalled a big change in computer development, as a purely electronic device replaced the previously used mechanical relays.

Aside from being used in computing, vacuum tubes were also used in radios, TVs, telephones and radar equipment.

The burnout

Apart from resembling a bulb, the tubes also shared some of its less desirable traits. They produced a lot of heat, which would eventually cause the filament to burn out, and the whole component would then need to be replaced.

This is because the device worked on a principle called thermionic emission, in which the cathode has to be heated until it releases electrons. It turns out that a component hot enough to endanger the rest of your circuit wasn’t the most effective approach.

The transition

Transistors came along just over 40 years later, and the vacuum tubes were slowly replaced with the solid-state alternative.

The solid-state device, so named because the electric current flows through solid semiconductor crystals instead of in a vacuum like its predecessor, could be made much smaller and did not overheat. The electronic component also acted as a switch or amplifier, so the bright star of the vacuum tube gradually burned out.

Sounds like success

Vacuum tubes are still around and have found a niche consumer base in audiophiles and hi-fi fanatics. Many amplifiers use the tubes in place of solid-state devices, and the devices have a dedicated following within the stereo community.

Although some of the materials that went into the original tubes have been replaced, mostly for safety reasons, old tubes classed as New Old Stock (NOS) are still sold and some musicians still prefer these. Despite this, modernised tubes are relatively popular and have all the familiar loveable features, like a tendency to overheat.

Don’t operate in a vacuum

Transistors are used in almost every single electronic product out there. Cyclops have a huge selection of transistors and other day-to-day and obsolete components. Inquire today to find what you’re looking for at sales@cyclops-electronics.com, or use the rapid enquiry form on our website.

Tags: electronics vacuum tube anode cathode transistors electronic component amplifiers


13 April 2022

Carbon nanotubes being used to develop ‘Smart Clothes’


Since the discovery of carbon nanotubes (CNTs) in 1991, the material has been utilised for commercial purposes in several areas, including anti-corrosion paints, hydrophobic coatings and engineering plastics.

CNT research helped pave the way for two-dimensional graphene to be studied and used and, on a broader scale, helped nanoscience develop into a field of study in its own right.

The material consists of a cylindrical tube of carbon atoms and can be single-walled or multi-walled. On a molecular level, CNTs are around 100 times stronger than steel at a fraction of the weight.

But in the last ten years, there have been studies into how the material’s heat and electrical conductive qualities might be used in another everyday product: clothes.

Keeping warm

A 2020 study by North Carolina State University examined the use of CNTs as a ‘smart fabric’. The researchers investigated how the material’s heating and cooling properties could be harnessed to make a cheaper alternative to the thermoelectric materials currently in use.

The plan is to integrate the CNTs into the fabric of the clothes rather than adding them as an extra layer, which gives the flexible material an advantage over others currently on the market.

When an external current is applied, the low thermal conductivity of the CNT fabric means heat is not carried back to the wearer, and the same applies to cooling.

Heart racing yet?

A study from seven years earlier examined how CNTs could be used as a built-in electrocardiogram (ECG) within athletic wear. Nanotube fibres sewn into the clothes monitored heart rate and took a continuous cardiogram of the wearer.

The Brown School of Engineering lab that conducted the research said the shirt would have to be a tight fit to make sure the material touched the skin, but the t-shirt was still – miraculously – machine-washable.

According to the researchers, the enhanced shirt actually performed better than a chest-strap ECG monitor in a comparative test, and could connect to Bluetooth devices to transmit the collected data.

Recharging…

In 2018 engineers from the University of Cincinnati, in partnership with the Wright-Patterson Air Force Research Laboratory, conducted a study into how CNT clothes could charge a phone.

This study investigated military applications of CNT clothing, where it could charge the electronics in a soldier’s field equipment and replace heavy batteries. It used a similar technique to the other studies, with CNT fibres sewn into the clothing.

Will it make fashion week?

Not quite yet. Although the material is comparatively cheap, the quantities required for mass production are higher than what is currently available, and the technology is still relatively young and untested. The specialist equipment needed for CNT textile production would also be an investment many manufacturers would decide against.

While CNTs may not be a hugely sought-after material just yet, Cyclops can supply you with hard-to-find electronic components when you need them most. Contact us now at sales@cyclops-electronics.com to see how we can help you.

Tags: smart clothes carbon nanotubes cnts nanoscience


06 April 2022

The tech behind the touch screen


Ever wondered how the touchscreen on your phone actually works? It’s such an integral part of our lives and, despite the fast advancements, the technology is still relatively new.

The first touch screen was invented in 1965 by E. A. Johnson and used a type of technology that is still widely used in touch screens today: capacitive.

Different types of touchscreen technology

The two main types of touch screens used are capacitive and resistive, which work in slightly different ways.

Capacitive

This tech, normally used for devices like smartphones and tablets, works by making your finger part of the circuit (not in a weird way!). The screen is built from layers of glass coated with a conductive material such as copper. When you touch the screen, your finger acts as a conductor and completes the circuit.

When this ‘touch event’ occurs, the device detects the change in electrical charge at the point of contact, and the screen’s receptors pass the signal on to the operating system, which responds.

This is why smartphones don’t really work when you’re wearing gloves: the conductor (your finger) is blocked by an insulator. However, styluses and specially designed gloves are made to be similarly conductive, which is why they do work.

Resistive

Touch screens that need more durability, like the screens on ATMs and self-checkouts, are usually resistive rather than capacitive. In this type of touch screen, a glass or plastic layer is covered by a resistive layer that conducts charge. When a point on the outer layer is pressed, the two layers touch and the electrical charge changes at that point.

The downside to this type of touch screen is that it can’t detect more than one touch at a time, unlike its capacitive equivalent.
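As a rough sketch of how a simple resistive controller might turn that change into a position, one common approach is to read the touch voltages with an ADC and scale them to screen coordinates (the ADC range and screen size below are illustrative assumptions, and real drivers also calibrate and filter the readings):

```python
# Rough sketch: a resistive touch controller reads the touch voltages with an ADC
# and scales them to pixel coordinates. ADC range and screen size are assumptions.
ADC_MAX = 4095                  # full-scale reading of a 12-bit ADC
SCREEN_W, SCREEN_H = 320, 240   # display resolution in pixels

def adc_to_position(adc_x: int, adc_y: int) -> tuple[int, int]:
    """Map raw ADC readings (proportional to the touch voltages) to pixel coordinates."""
    x = round(adc_x / ADC_MAX * (SCREEN_W - 1))
    y = round(adc_y / ADC_MAX * (SCREEN_H - 1))
    return x, y

print(adc_to_position(2048, 1024))  # -> (160, 60): centre horizontally, upper part of the screen
```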

Other types

There are several other types of touch screens, including ones that use infrared LEDs and photodetectors, optical sensors, and even ones that use friction and acoustic signals. However, none of these variations are as widely used as capacitive or resistive screens.

Components of a touchscreen

A touch screen is made up of around four layers.

The top layer, the one users actually touch, is the cover lens – it’s the part of the screen we can see and interact with.

The next layer is the touch sensor, a clear glass or plastic panel with an electric current running through it. When a touch occurs it causes a voltage change, which is sensed by a small microcontroller-based chip called the touch controller, and from this the controller can determine the location of the touch.

Under the touch sensor is the display, usually a liquid crystal display (LCD) or active-matrix organic light emitting diode (AMOLED) technology.

The final element of the touch screen is the software, which interprets the signals sent to it and forms a response.
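Pulling the sensor and touch controller steps together, here's a simplified illustration of the kind of maths a controller might do on a capacitive sensor grid: take each electrode's change from its baseline reading and compute a signal-weighted average to estimate where the finger is (a sketch only, not any specific controller's firmware):

```python
# Simplified illustration of how a touch controller could estimate a touch
# location on a capacitive sensor grid: take each electrode's change from its
# baseline reading and compute a signal-weighted average (centroid).
def estimate_touch(deltas: list[list[float]]) -> tuple[float, float]:
    """Return the (row, column) centroid of the capacitance changes on the grid."""
    total = sum(sum(row) for row in deltas)
    if total == 0:
        raise ValueError("no touch detected")
    row_c = sum(r * sum(row) for r, row in enumerate(deltas)) / total
    col_c = sum(c * val for row in deltas for c, val in enumerate(row)) / total
    return row_c, col_c

# Example: a touch centred roughly between grid rows 1 and 2, around column 1.
deltas = [
    [0.0, 0.1, 0.0],
    [0.1, 0.9, 0.2],
    [0.0, 0.4, 0.1],
]
print(estimate_touch(deltas))  # -> approximately (1.22, 1.11)
```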

In front of the screen

Technology is currently being developed that takes the ‘touch’ out of touch screens altogether, predicting the target a user is aiming for before they make contact. The tech uses AI and sensors so that the user never needs to physically touch the screen, removing the risk of bacteria or pathogens being spread via the surface.

So, in a future where we may be more careful of what surfaces we touch, we will wait with anticipation for what touch screen technology could bring.

For peace of mind, Cyclops screen and quality test all the components we stock. For a trustworthy supplier with more than 30 years of experience, get in touch with Cyclops today at sales@cyclops-electronics.com

Tags: touchscreen capacitive resistive smartphones tablets copper electrical charge infrared led ai


30 March 2022

The process of making silicon semiconductors


As the global shortage of semiconductors (also called chips) continues, what better time is there to read up on how these intricate, tiny components are made?

One of the reasons the industry can’t catch up with the heightened demand for chips is that creating them takes huge amounts of time and precision. From the starting point of refining quartz sand, to the end product of a tiny chip with the capacity to hold thousands of components, let’s have a quick walkthrough of it all:

Silicon Ingots

Silicon is the most common semiconductor material currently used, and is normally refined from the naturally-occurring material silicon dioxide (SiO₂) or, as you might know it, quartz.

Once the silicon has been refined until it is hyper-pure, it is heated to around 1420˚C, just above its melting point. A single seed crystal is then dipped into the molten silicon and slowly pulled out as the liquid forms a perfect crystalline structure around it (known as the Czochralski process), producing a large cylindrical ingot. This is the start of our wafers.

Slicing and Cleaning

The large cylinder of silicon is then cut into very fine slices with a diamond saw and polished to a precise thickness for use in integrated circuits (ICs). This polishing is undertaken in a clean room, where workers have to wear full-body suits that will not shed particles. Even a single speck of dirt could ruin the wafers, so the clean room only allows up to 100 particles per cubic foot of air.

Photolithography

In this stage the silicon is coated with a layer of light-sensitive material called photoresist and exposed to UV light through a mask that carries the circuit pattern. A developer solution then washes away part of the photoresist (either the exposed or unexposed areas, depending on the type of resist), leaving a patterned layer behind that defines the circuits on the wafer.

Fun fact – The yellow light often seen in pictures of semiconductor fabs is in the lithography rooms. The photoresist is sensitive to short-wavelength, high-frequency light, which is why UV is used for exposure. To avoid accidentally exposing the resist, the rooms are lit with longer-wavelength yellow light instead.
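The ‘high frequency versus low frequency’ point comes down to photon energy, E = hc/λ: shorter wavelengths carry more energy per photon. A quick illustrative comparison (the specific wavelengths, a common UV exposure line and typical yellow room light, are assumptions for the sake of the example):

```python
# Photon energy comparison behind the yellow-light rule: E = h * c / wavelength.
# The two wavelengths are illustrative (a common UV exposure line vs yellow room light).
H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

for label, wavelength_nm in [("UV exposure light (~365 nm)", 365), ("yellow room light (~590 nm)", 590)]:
    energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    print(f"{label}: ~{energy_ev:.2f} eV per photon")
# UV photons (~3.4 eV) carry noticeably more energy than yellow ones (~2.1 eV),
# which is why the resist reacts to the former but tolerates the latter.
```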

The process of photolithography can be repeated many times to create the required outlines on each wafer, and it is at this stage that the outline of each individual rectangular chip is printed onto the wafer too.

Layering

The circuits are built up in layers, with up to 30 or more patterned layers deposited and etched in sequence to create the finished ICs. The outlines of the individual chips are then cut to separate them from the wafer, and each chip is packaged to protect it.

The final product

Because of this convoluted, delicate process, manufacturing a single semiconductor is estimated to take up to four months. This, together with the specialist facilities needed for production, means an extreme amount of care has to be taken throughout fabrication.

If you’re struggling to source electronic components during this shortage, look no further than Cyclops Electronics. Cyclops specialises in both regular and hard-to-find components. Get in touch now to see how easy finding stock can be, at sales@cyclops-electronics.com.

Tags: shortage semiconductors chips silicon ingots slicing and cleaning photolithography layering


23 March 2022

The History of Transistors


Transistors are a vital, ubiquitous electronic component. Their main function is to switch or amplify the electrical current in a circuit, and a modern device like a smartphone can contain between 2 and 4 billion transistors.

So that’s some modern context, but have you ever wondered when the transistor was invented? Or what it looked like?

Pre-transistor technology

Going back to when Ohm’s Law was first formulated in the 1820s, people were already aware of circuits and the flow of current, and by extension, of conductors.

Following on from this, semiconductors made their first appearance with the rectifier, a device that converts alternating current (AC) to direct current (DC), in 1874.

Two patents were filed in the 1920s and 1930s for devices that would have been transistors had they ever progressed past the theoretical stage. In 1925 Julius Lilienfeld of Austria-Hungary filed a patent, but never published any papers on his field-effect transistor research, and so his discoveries went largely unnoticed.

Similarly, in 1934 the German physicist Oskar Heil patented a device that could control the current in a circuit by applying an electric field. This too remained purely theoretical and never became the first field-effect transistor.

The invention of transistors

The official invention of a working transistor was in 1947, and the device was announced a year later in 1948. The inventors were three physicists working at Bell Telephone Laboratories in New Jersey, USA. William Shockley, John Bardeen and Walter Brattain were part of a semiconductor research subgroup working out of the labs.

One of the first attempts they made at a transistor was Shockley’s semiconductor triode, which was made up of three electrodes, an emitter, a collector and a large low-resistance contact placed on a block of germanium. However, the semiconductor surface trapped electrons, which blocked the main channel from the effect of the external field.

Although this initial idea did not work out, the surface-state issue was solved in 1946. After spending some time investigating three-layer structures featuring a reverse- and a forward-biased junction, the team returned to field-effect devices a year later, in 1947. At the end of that year they found that with two contact junctions placed very close together, one forward biased and one reverse biased, they could achieve a small gain.

The first working transistor featured a strip of gold over a triangle of plastic, finely cut with a razor at the tip to create two contact points a hair’s breadth apart, placed on top of a block of germanium.

The device was announced in June of 1948 as the transistor – a mix of the words ‘transconductance’, ‘transfer’ and ‘varistor’.

The French connection

At the same time over the water in France, two German physicists working for Compagnie des Freins et Signaux were at a similar stage in the development of a point contact device, which they went on to call the ‘transistron’ when it was released.  

Herbert Mataré and Heinrich Welker released the transistron a few months after the Bell Labs transistor was announced, but it was engineered entirely independently of their American counterparts, owing to the secrecy around the Bell project.

Where we are now

The first germanium transistors were used in computers as a replacement for vacuum tubes, and transistor car radios were in production within only six years of the transistor’s invention.

The first transistors were made with germanium, but since the material can’t withstand temperatures above about 180˚F (82.2˚C), Bell Labs made the first silicon transistor in 1954, and Texas Instruments began mass-producing silicon transistors commercially the same year. Six years later, in 1960, the first in the direct bloodline of modern transistors was made, again at Bell Labs: the metal-oxide-semiconductor field-effect transistor (MOSFET).

Between then and now, most transistor technology has been based on the MOSFET, with feature sizes shrinking from around 40 micrometres when they were first invented to an average of about 14 nanometres today.
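For a sense of scale, a quick calculation using those two figures:

```python
# How much MOSFET feature sizes have shrunk, using the two figures quoted above.
first_mosfet_nm = 40_000  # 40 micrometres expressed in nanometres
modern_nm = 14

shrink_factor = first_mosfet_nm / modern_nm
print(f"Roughly {shrink_factor:,.0f}x smaller")  # about 2,857x
```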

The latest development in transistor technology is the RibbonFET, announced by Intel in 2021 – a transistor whose gate surrounds the channel on all sides. The tech is due to come into use in 2024, when Intel switches its process naming from nanometres to the even smaller angstrom.

There is also other tech that is being developed as the years march on, including research into the use of 2D materials like graphene.

If you’re looking for electronic components, Cyclops are here to help. Contact us at sales@cyclops-electronics.com to order hard-to-find or obsolete electronic components. You can also use the rapid enquiry form on our website https://www.cyclops-electronics.com/

Tags: transistors electronic component conductors semiconductors ac-dc texas instruments


16 March 2022

Ukraine - Russia conflict may increase global electronics shortage


Due to the conflict between Russia and Ukraine, both of which produce essential materials for chip fabrication, the global electronic component shortage may worsen.

Ukraine produces approximately half of the global supply of neon gas, which is used in the photolithography stage of chip production. Russia is responsible for about 44% of the world’s palladium, which is used in the chip plating process.

The two leading Ukrainian suppliers of neon, Ingas and Cryoin, have halted production and said they would be unable to fill orders until the fighting has stopped.

Ingas has customers in Taiwan, Korea, the US and Germany. The company’s headquarters are in Mariupol, which has been a conflict zone since late February. According to Reuters, Ingas’s marketing officer could not be reached because internet and phone connections in the city were down.

Cryoin said it had been shut since February 24th to keep its staff safe and would be unable to fulfil March orders. The company said it could only stay afloat for around three months with the plant closed, and would be even less likely to survive financially if any equipment or facilities were damaged.

Many manufacturers fear that neon gas, a by-product of Russian steel manufacturing, will see a price spike in the coming months. In 2014, during the annexation of Crimea, the price of neon rose by 600%.

Larger chip fabricators will no doubt see smaller losses due to their stockpiling and buying power, while smaller companies are more likely to suffer as a result of the material shortage.

It is further predicted that shipping costs will rise as more borders close and sanctions take effect, along with prices for crude oil and auto fuel.

The losses could be mitigated in part by finding alternatives to neon and palladium, some of which can be produced in the UK or the USA. Chlorine- or fluorine-based gases could be used in place of neon, while palladium can be sourced from some western countries.

Neon could also be supplied by China, but the shortages mean that the prices are rising quickly and could be inaccessible to many smaller manufacturers.

Neon consumption worldwide for chip production was around 540 metric tons last year, and if companies began neon production now it would take between nine months and two years to reach steady levels.

Tags: chip fabrication electronic component neon gas chip production palladium ingas cryoin shipping costs

