The world was, as before, suffering from climate catastrophes. The eruption of Mount Etna had been forgotten by most; only the people living under clouds of ash still complained about it.
----
Nuke was spreading toward its peak. It was sold at almost every street corner and could be ordered over social media platforms.
The tiny container that held the drug Nuke resembled a saline re-moisturizer for contact-lens wearers from the late 1990s. The solution was dyed red and, in some versions, had a small needle that was exposed once the covering was removed to administer the drug. It had to be injected into an artery.
The narcotic caused people to forget to eat and drink, leading to a rapid rise in deaths around the world. The police were constantly dealing with small-time dealers and consumers of Nuke.
The news reported constantly on Nuke's harmful influence and the deaths of its consumers.
The UN was conflicted about how to handle the consumers.
The narcotic was so strong that depriving users of Nuke meant facing millions of crazed, aggressive people fighting for their next dose.
It was a lost cause. The UN decided to allow police officers to shoot any dealer or consumer the moment a danger arose. They also tried to shut down the distribution networks, but the social platforms were too numerous. Coupled with protests over privacy, progress was slow and hard.
----
On the positive side, LifeUp became internationally famous. Its product NuHealth 1.0 was simply too popular. In a society that wanted to operate at 100% of its mental and physical strength, it was easy to see why orders were so high.
Many people wanted to invest in LifeUp, but it declined every proposal. This made LifeUp even more famous, as everyone searched for the reason behind the refusals. Ben had never planned to take the company public, so LifeUp could only remain silent.
----
Genetic engineering had improved to the point that developed countries approved curing embryos of genetic diseases. Massive protests were organized in Europe and America against this act of playing God.
In China, the editing of embryos was publicly allowed only for curing diseases, but a new market had formed to meet the high demand to edit other traits such as height, hair color, and so on.
As this genome editing was not performed by experts, the modified genome was often damaged, resulting in babies with disabilities.
----
Neurolink's chip was also sold all over the world. The chip's endless potential and the idea of fusing humans with machines drew investors from around the world, driving Neurolink's value higher than ever before.
Every month, new applications appeared, and downloading information directly to the brain became very popular.
The UN had to decide whether chips would be allowed during exams and how their use could be controlled.
The debate also raged on media platforms like YouTube.
----
A device that creates negative mass had been built.
In our everyday experience, if you push something, it moves away from you. But objects with negative mass would turn that basic principle on its head and accelerate towards you instead. It sounds like science fiction but the idea is possible, and its effects have been observed in recent experiments. Now, scientists from the University of Rochester have developed a device that can create and collect particles that exhibit negative mass.
Newton's Second Law of Motion says that the force on an object is equal to its inertial mass multiplied by its acceleration (F = ma). Normally, all of those calculations are made with positive mass: applying a positive force to an object with positive mass results in positive acceleration, pushing the object forward. But if an object has negative mass, the acceleration becomes negative, meaning the object will move in the opposite direction, essentially pressing itself against your hand if you try to push it.
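The sign flip can be shown with a trivial calculation. This is only a sketch of the F = ma arithmetic for idealized point masses, not a model of the actual experiment:

```python
# Newton's second law rearranged: a = F / m.
# A positive mass accelerates along the applied force;
# a hypothetical negative mass accelerates against it.

def acceleration(force: float, mass: float) -> float:
    """Return the acceleration produced by a force acting on a mass."""
    return force / mass

# The same +10 N push applied to two objects:
print(acceleration(10.0, 2.0))   # 5.0 m/s^2, away from the push
print(acceleration(10.0, -2.0))  # -5.0 m/s^2, toward the push
```

The only difference between the two calls is the sign of the mass, which is exactly what flips the direction of motion.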
At least, that's the idea. Negative mass has mostly been demonstrated in theoretical analyses, although earlier experiments manipulated rubidium atoms with lasers to create a fluid that acted as if it had negative inertial mass. Now, the University of Rochester researchers say they've developed a device that can create particles exhibiting negative mass by combining photons from laser light with excitons in a semiconductor.
This news made many people dream about portals and other theoretical ideas.
The UN financed a company named Departure to make portals, stargates, and wormholes possible in the future. But thus far, such concepts remained far from reality.
The UN counted on the best AIs in the world to accelerate progress. The most promising AI so far was a human-level artificial intelligence named Dolores.
Dolores could perform analytical tasks such as simulations and predictions far better than humans. It was less narrow-minded and did not make human mistakes.
With its vast knowledge and simulation capacity, its chances of helping the research were huge.
Meanwhile, the chip debate continued. The main argument of the pro side was that humans fused with implanted chips saw them as part of themselves; there was no reason not to use the brain's full potential. Chips would also be accepted in the workplace.
The main argument of the contra side was that chips would widen the gap between rich and poor.
----
In 2032, the pressure to be intelligent had never been higher, causing great interest in nootropics. Nootropics are drugs, supplements, and other substances that may improve cognitive function, particularly executive function, memory, creativity, or motivation, in healthy individuals. The most famous were Modafinil and alpha-PBC. This interest resulted in many trials of new nootropics, most of which turned out to be a waste of money.
But in an unknown large research laboratory, a new nootropic was produced, one that would later greatly impact the market and cause many deaths.
----
Other technologies, like fusion reactors, self-driving cars, and Full Dive technology, are still under research but are nearing their first practical realizations.
"Smart grid" technology is widespread in developed nations and is being used in more and more homes.
In prior decades, the disruptive effects of energy shocks, alongside ever-increasing demands of growing and industrializing populations, were putting a strain on the world's power grids. Blackouts occurred in the worst-hit regions, with consumers becoming more and more conscious of their energy use and taking measures to either monitor and/or cut back their consumption. This already precarious situation was exacerbated by the relatively ancient infrastructure in many countries. Much of the grid at the beginning of the 21st century was extremely old and inefficient, losing more than half of its available electricity during production, transmission, and usage. A convergence of business, political, social, and environmental issues forced governments and regulators to finally address this problem.
By 2032, integrated smart grids are becoming widespread in the developed world, the main benefit of which is the optimal balancing of demand and production. Traditional power grids had previously relied on a just-in-time delivery system, where supply was manually adjusted constantly in order to match demand. Now, this problem is being eliminated due to a vast array of sensors and automated monitoring devices embedded throughout the grid. This approach had already emerged on a small scale, in the form of smart meters for individual homes and offices. By 2032, it is being scaled up to entire national grids.
Power plants now maintain constant, real-time communication with all residents and businesses. If capacity is ever strained, appliances instantly self-adjust to consume less power, even turning themselves off completely when idle. Since balancing demand and production is now achieved on a real-time, automatic basis within the grid itself, this greatly reduces the need for "peaker" plants as supplemental sources. In the event of any remaining gap, algorithms calculate the exact requirements and turn on extra generators automatically.
Computers also help adjust for and level out peaks and troughs in energy demand. Sensors in the grid can detect precisely when and where consumption is highest. Over time, production can be automatically shifted according to the predicted rise and fall in demand. Smart meters can then adjust for any discrepancies. Another benefit of this approach is allowing energy providers to raise electricity prices during periods of high consumption, helping to flatten out peaks. This makes the grid more reliable overall since it reduces the number of variables that need to be accounted for.
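The balancing loop described above can be sketched in a few lines. Everything here is illustrative: the function name, load categories, and megawatt figures are invented, and real grid software is vastly more complex:

```python
# Toy demand-response: when total demand exceeds capacity, flexible
# loads are scaled down proportionally so the grid stays balanced.

def balance_grid(capacity_mw: float, loads_mw: dict) -> dict:
    """Return adjusted loads whose total never exceeds the available capacity."""
    total = sum(loads_mw.values())
    if total <= capacity_mw:
        return dict(loads_mw)  # enough supply: no adjustment needed
    factor = capacity_mw / total  # uniform curtailment factor
    return {name: load * factor for name, load in loads_mw.items()}

loads = {"homes": 600.0, "offices": 300.0, "factories": 300.0}  # demand: 1200 MW
adjusted = balance_grid(900.0, loads)  # only 900 MW of capacity available
print(adjusted["homes"], sum(adjusted.values()))  # 450.0 900.0
```

A real system would prioritize critical loads and apply time-of-use pricing rather than curtailing everything uniformly, but the core idea of matching demand to supply automatically is the same.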
Yet another advantage of the smart grid is its capacity for bidirectional flow. In the past, power transmission could only be done in one direction. Today, a proliferation of local power generation, such as photovoltaic panels and fuel cells, means that energy production is much more decentralized. Smart grids now take into account homes and businesses which can add their own surplus electricity to the system, allowing energy to be transmitted in both directions through power lines.
This trend of redistribution and localization is also making large-scale renewables more viable since the grid is now adaptable to the intermittent power output of solar and wind. On top of this, smart grids are also designed with multiple full load-bearing transmission routes. This way, if a broken transmission line causes a blackout, sensors instantly locate the damage while electricity is rerouted to the affected area. Crews no longer need to investigate multiple transformers to isolate a problem, and blackouts are reduced as a result. This also prevents any kind of domino effect from setting off a rolling blackout.
Overall, this new "internet of energy" is far more sustainable, efficient, and reliable. Energy costs are reduced while paving the way to a post-carbon economy. Countries that quickly adopt smart grids are better protected from oil shocks, while greenhouse gas emissions are reduced by almost 20 percent in some nations. As the shift to clean energy continues, this situation will only improve, expanding to even larger scales. Regions are beginning to merge their grids on a country-to-country, and eventually continent-wide, basis.
----
By 2032, a new cellular network standard has emerged that offers even greater speeds than 5G. Early research on this sixth generation (6G) had started during the late 2010s, when China, the USA, and other countries investigated the potential of working at higher frequencies.
Whereas the first four mobile generations tended to operate between several hundred and several thousand megahertz, 5G had expanded this range into the tens and hundreds of thousands of megahertz. A revolutionary technology at the time, it allowed vastly improved bandwidth and lower latency. However, it was not without its problems, as exponentially growing demand for wireless data transfer put ever-increasing pressure on service providers, while even shorter latencies were required for certain specialist and emerging applications.
This led to the development of 6G, based on frequencies ranging from 100 GHz to 1 THz and beyond. A ten-fold boost in data transfer rates meant users could enjoy terabit-per-second speeds.
Furthermore, improved network stability and latency, achieved with AI and machine learning algorithms, could be combined with even greater geographical coverage. The Internet of Things, already well-established during the 2020s, now had the potential to grow by further orders of magnitude and connect not billions, but trillions of objects.
----
Zettaflop-scale computing is now available, a thousand times more powerful than the computers of 2020 and a million times more powerful than those of 2010. One field seeing particular benefit is meteorology. Weather forecasts can be generated with 99% accuracy over a two-week period. Satellites can map wind and rain patterns in real time at phenomenal resolution: from square kilometers in previous decades down to square meters with today's technology. Climate and sea-level predictions can also be achieved in greater detail than ever before, offering greater certainty about the long-term outlook for the planet.
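The scaling claims are simple powers of ten, assuming roughly petaflop-class (10^15 FLOPS) leading machines in 2010 and roughly exaflop-class (10^18 FLOPS) leaders in 2020; those assumptions are what make the "thousand times" and "million times" figures line up:

```python
ZETTAFLOP = 10**21  # operations per second, today's scale in the story
EXAFLOP = 10**18    # rough top supercomputer scale around 2020
PETAFLOP = 10**15   # rough top supercomputer scale around 2010

print(ZETTAFLOP // EXAFLOP)   # 1000: a thousandfold over 2020
print(ZETTAFLOP // PETAFLOP)  # 1000000: a millionfold over 2010
```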