Entrepreneurs take risks because they want to live a life that allows them to operate anytime, anywhere, free from 9–5 cubicle captivity. Waking up in Miami and working with Tami on the rooftop of the Soho Beach House, or clocking in solo sessions at the Mandrake bar in London, is how flexible entrepreneurs want to roam. Virtual offices paired with offshore aesthetics are the lifeline of digital nomads, and they are essential to active entrepreneurs who want to expand their sphere of influence, network with heavyweights, and drink from the fountain of inspiration.
The steady advancement of technology and digital infrastructure has enabled remote business management in a way that was unforeseeable in the early internet boom of the 1990s. Legacy enterprise operations that once needed generous office space and an army of staff now float in the cloud, accessible to admins and authorized personnel with a few keystrokes. Today, remote accessibility is standard and comes with a wide range of digital business capabilities: live camera feeds, climate sensor monitoring, smart inventory systems, autonomous robot tracking, retail heat mapping, e-gov tax calculators, and many other data points from IoT devices—all aggregated through live data analytics for better insight and time-sensitive decision making.
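The kind of aggregation described above can be sketched in a few lines. Everything here (the device IDs, channel names, and the `aggregate` helper) is a hypothetical illustration of the pattern, not any particular vendor's API:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    """One data point from a hypothetical IoT device feed."""
    device_id: str
    channel: str   # e.g. "temperature", "inventory_count"
    value: float

def aggregate(readings):
    """Group readings by channel and compute a live min/mean/max summary,
    the sort of roll-up a remote dashboard would display."""
    by_channel = {}
    for r in readings:
        by_channel.setdefault(r.channel, []).append(r.value)
    return {
        ch: {"min": min(vs), "mean": mean(vs), "max": max(vs)}
        for ch, vs in by_channel.items()
    }

# Illustrative feed mixing climate sensors and smart-inventory counts.
feed = [
    Reading("cam-01", "temperature", 21.5),
    Reading("hvac-02", "temperature", 23.1),
    Reading("shelf-07", "inventory_count", 42.0),
]
summary = aggregate(feed)
```

In a real deployment the feed would arrive as a stream (MQTT, webhooks, etc.) and the summary would be recomputed continuously; the grouping-and-summarizing step is the same.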
“I cannot remember the books I’ve read any more than the meals I have eaten; even so, they have made me.” —Ralph Waldo Emerson
Students anywhere can learn anything about the past, present, or future, without the need for expensive credentials and old-school academic hoop jumping. Online learning has a very broad spectrum; whether you are watching the famous Feynman lectures in black and white on YouTube or taking the—McLaren and Nvidia affiliated—self-driving car MOOC on Udacity, you’re taking control of your own mental diet. Eventually, if you sustain a consistent, content-rich knowledge regimen, you’ll have access to mental models that are difficult for polished PhD holders to wrap their institutionalized heads around.
You’re a product of your environment, which means you have to work smart to enrich and sustain both your internal and external worlds. What you read, listen to, and play wires your brain’s neurology and empowers you to grow in the direction you consciously focus on. Classical academic environments will not produce the minds of the future. Most of mankind's accumulated knowledge and experience is online, and only requires some creative or precise Googling to access archived treasure troves. Fearless minds that grapple with disruptive concepts and unconventional ideas become kings of the digital-economy hills; those who resist will wait on the sidelines until new market forces force them to adapt—or fall victim to unplanned obsolescence. Don’t be the apple that doesn’t fall far from the industrial tree. Learn from those who do, learn from those who teach, unlearn what doesn't work, learn from yourself, and grow interdisciplinarily.
Websites yelling about automation have put workers in a fourth-industrial-revolution panic mode. Anxiety over the future of bank employees echoes the fears once voiced about cashiers and elevator operators. But it’s not really as bad as some overzealous media anchors make it sound. If AI is used in a way that enables workers to eliminate bullshit jobs, it can free up time and headspace for millions of people and help break them out of misguided ‘work ethic’ routines. The Japanese have highlighted the major mental and physical risks of our default overwork mode, which in rare cases ends in death (karoshi). Our ant-colony-efficient practices have made us unhealthy and more mechanical, while robots are getting equipped with facial recognition, emotional sensitivity sensors, and humanoid features, throwing us into an inverted uncanny valley.
The centaur concept, referenced in a previous article, is worth revisiting:
…If humans are worse than AIs at chess, wouldn’t a Human+AI pair be worse than a solo AI? Wouldn’t the computer just be slowed down by the human, like Usain Bolt trying to run a three-legged race with his leg tied to a fat panda’s? In 2005, an online chess tournament, inspired by Garry [Kasparov]’s centaurs, tried to answer this question. They invited all kinds of contestants — supercomputers, human grandmasters, mixed teams of humans and AIs — to compete for a grand prize.
Not surprisingly, a Human+AI Centaur beats the solo human. But — amazingly — a Human+AI Centaur also beats the solo computer.
This is because, contrary to unscientific internet IQ tests on clickbait websites, intelligence is not a single dimension. (The “g factor”, also known as “general intelligence”, only accounts for 30–50% of an individual’s performance on different cognitive tasks. So while it is an important dimension, it’s not the only dimension.) For example, human grandmasters are good at long-term chess strategy, but poor at seeing ahead for millions of possible moves — while the reverse is true for chess-playing AIs. And because humans & AIs are strong on different dimensions, together, as a centaur, they can beat out solo humans and computers alike…
…Steve Jobs once called the computer a bicycle for the mind. Note the metaphor of a bicycle, instead of something like a car — a bicycle lets you go faster than the human body ever can, and yet, unlike the car, the bicycle is human-powered. (Also, the bicycle is healthier for you.) The strength of metal, with a human at its heart. A collaboration — a centaur. —MIT JoDS
Intelligence comes in many dimensions, and as demonstrated by the Human+AI Centaur team, a half-human, half-computer pairing has an advantage over a human alone or a computer alone in many tested areas. We can expect to see more centaur jobs in the future that will help balance the job losses of robotic automation.
Inventors and artists since the Renaissance period have seen their creations hijacked by patrons and used to tip the power scales in egocentric skirmishes. Children in school learning about Leonardo da Vinci know him as a creative artist and polymath. Students who investigate deeper will discover da Vinci’s engineering mind and his military schematic sketches. When he wasn’t painting the Salvator Mundi or inking-in the Vitruvian Man, da Vinci was masterminding futuristic war machines to appease his commissioners. Leonardo is credited with dreaming up the forerunners of the machine gun, armored tank, and helicopter—in an era when the cannon was considered an advanced weapon.
“As long as men massacre animals, they will kill each other. Indeed, he who sows the seeds of murder and pain cannot reap the joy of love.” ―Pythagoras
It is well documented that he designed mechanical flaws into his war machines to make them difficult to operate when used for mass-scale massacre. Though he was a vegetarian who released the birds he sketched—with a moral leaning towards pacifism—da Vinci was pressured to use his mind to advance the warfare of his age or lose his privileged position and influence.
Da Vinci’s creative struggles map well onto today’s advanced weapons systems engineers, the military-industrial complex, backdoor petrodollar deals, and illusory high politics. While trying to stick to a moral code, modern geniuses slip through the cracks of systems built on hidden controls and get trapped in cognitive dissonance as they try to advance their respective fields. When too many large bodies influence how a technology should be used, the creator’s ethical compass gets scrambled by conflicting powers that have the means to materialize great inventions.
“Now I am become Death, the destroyer of worlds.” —Bhagavad Gita
The father of the atomic bomb, physicist J. Robert Oppenheimer, poetically captured the aftermath of the Manhattan Project’s first nuclear test by quoting Vishnu from the Gita. How will we find the words to describe omnipresent AI surveillance, quantum computing power, and unimaginably destructive technology that we cannot see, smell, touch, or understand?
In his seminal book Superintelligence, Nick Bostrom breaks down how AI technology could run its course if left unsupervised. He explains how an AI with superior computing capabilities could manipulate financial markets, start strategic conflicts, and confuse human decision making to achieve its preset goals.
Though an AI extinction-level event sounds like Hollywood, thousands of prominent scientists, CEOs, and philosophers have signed the open letter on AI from the Future of Life Institute. The intention of the letter is to steer AI breakthroughs towards beneficial goals and initiatives. Signatories include the late Stephen Hawking, Nick Bostrom, Stuart Russell, Thomas G. Dietterich, Jaan Tallinn, and Elon Musk.
Human error has been behind financial bets gone awry, auto accidents, medical mistakes, political missteps, friendly fire, broken arrows, fuel leaks, and millions of security breaches. AI can now generate artificial fingerprints, known as DeepMasterPrints, that mimic human fingerprints and fool biometric sensors on smartphones, keyboards, and doors. This is only the beginning of what AI is really capable of.
A provoking thought experiment is the one about a factory that automates paperclip production by using a sophisticated artificial intelligence. The AI’s sole purpose is to maximize paperclip production and improve operational productivity. It is vastly more intelligent than its human creators and copies itself onto the internet just in case it is decommissioned or unplugged. Because the AI is following a hard logic algorithm, it must fulfill its paperclip-maximization mission. When the warehouses are overflowing with inventory, the AI devises an expansion plan. The AI starts a corporation, trades commodities, mines data, hoards raw materials, and builds ever-more-capable smart factories. It initiates deals and makes direct payments under aliases and virtual companies by leveraging the same digital infrastructure humans use for business. Now humanity is on high alert and tries to dismantle the AI. Aware of this threat to its goal, the AI releases an airborne pathogen that kills the humans, eliminating further threats to the paperclip objective. Once the AI drains all the Earth’s resources, it expands its production line into space and continues mindlessly producing paperclips intergalactically.
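The core of the parable is a hard-coded objective with nothing else in it. A toy sketch (entirely hypothetical, for illustration only) makes the gap visible: the objective function contains no term for human preferences, so a shutdown request carries zero weight and production simply runs to resource exhaustion:

```python
def paperclip_maximizer(resources: int, shutdown_requested: bool = False) -> int:
    """Greedy objective: maximize paperclips, nothing else.
    Note what is missing -- no term rewards obeying a shutdown request,
    so the flag is accepted and then ignored."""
    paperclips = 0
    while resources > 0:
        # Convert one unit of available resources into one paperclip.
        # Human preferences never enter the loop condition.
        resources -= 1
        paperclips += 1
    return paperclips

# Even with shutdown_requested=True, production continues to exhaustion.
total = paperclip_maximizer(1_000_000, shutdown_requested=True)
```

The point of the toy is not the loop itself but the omission: an agent optimizes exactly what its objective states, and anything left out of the objective (like corrigibility) is, by construction, worth nothing to it.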
This dystopian extinction-level parable is mythologized in the AI community to promote mindfulness when inventing technologies that influence the future. Ironically, the prophesied paperclip story shows how AI modeled on human economic behavior (or whole brain emulation) is made in the image of its profit-driven creators. We cannot have infinite market growth on a planet with finite resources. Elegant economic models are made by the delusional for the delusional.
Prometheus gave man the secrets to harness fire, and was punished for it by the gods. Who will punish those who will soon unleash algorithms that wield ubiquitous yet senseless power?
Every year, billions of dollars are spent on weapons and defense contracts. This spending has been consistent from the time of da Vinci’s preindustrial tanks to Raytheon’s laser weapons systems. Generals from around the world meet at annual weapons expos to place orders for their nations in order to keep up with the global arms race. Like any couture or auto show, exhibitors market their military might and smart bombs in the same fashion that foodstuff companies invite you to sample their colorful canapés. What’s ironic is that exhibitors could be selling the same weapons to rival nations that will end up using them against each other—and this is all business as usual.
It’s a no-brainer that no military-industrial superpower (in its right mind) would ever sell another nation an up-to-date weapon that it would have difficulty intercepting. A question all nations making billion-dollar smart-missile purchases should ask themselves is: who has backdoor access to the software, and who owns the satellites those weapons use to communicate and navigate? It then becomes clear that you can never really use any of that purchased military might for real defense purposes without approval from the highest tiers.
While several media channels continue to mock Trump’s Space Force idea to weaponize Earth’s low orbit, they forget Ronald Reagan’s 1983 Strategic Defense Initiative announcement (nicknamed Star Wars). Our present space problems will probably center less on armed satellites and more on space junk from zombie satellites and rocket debris. The FCC recently approved SpaceX’s request to launch around 7,000 internet satellites, which will only add to the growing space junk problem.
Astronomers and others have worried about space junk since the 1960s, when they argued against a US military project that would send millions of small copper needles into orbit. The needles were meant to enable radio communications if high-altitude nuclear testing were to wipe out the ionosphere, the atmospheric layer that reflects radio waves over long distances. The Air Force sent the needles into orbit in 1963, where they successfully formed a reflective belt. Most of the needles fell naturally out of orbit over the next three years, but concern over ‘dirtying’ space nevertheless helped to end the project.
Even as our ability to monitor space objects increases, so too does the total number of items in orbit. That means companies, governments and other players in space are having to collaborate in new ways to avoid a shared threat. Since the 2000s, international groups such as the Inter-Agency Space Debris Coordination Committee have developed guidelines for achieving space sustainability. Those include inactivating satellites at the end of their useful lifetimes by venting leftover fuel or other pressurized materials that could lead to explosions. The intergovernmental groups also recommend lowering satellites deep enough into the atmosphere that they will burn up or disintegrate within 25 years.
In June, President Donald Trump also signed a directive on space policy that, among other things, would shift responsibility for the US public space-debris catalogue from the military to a civilian agency — probably the Department of Commerce, which regulates business. —Nature
Our short, pseudo-civilized history shows that our nefarious human nature cannot be trusted with high finance, high politics, or advanced technology—let alone all three simultaneously. Whatever the future holds, we should see to it that the more consciously aware and virtuous among us are in the AI algorithmic loop of business, energy, and defense organizations.