In this age of consumerism, it is easy to buy into parroted beliefs slung by the left and right: "there is no ethical consumption under capitalism," or "poor people need to pull themselves up by their own bootstraps." As the left adopts an anti-work mentality and the right installs anti-homeless architecture, it becomes clear why tension between the economic left and the economic right keeps mounting. There is plenty to be said about the other facets of these ideologies (be it forgiving dictators of countries one doesn't belong to, or outright animosity toward entire classes of people who simply exist and cannot control their existence), but my focus here is narrower: the "capitalism bad, communism good" mentality, and its mirror image, so prevalent in our society is an intrinsically outdated ideology in an era of virtually infinite, free information, an era the forefathers of these ideas could not have begun to imagine as a potential reality. That leaves room to entirely rehash what "left" and "right" even mean in our day of eternal connectivity and endless information.
It is worth noting that Adam Smith himself wrote, "A man must always live by his work, and his wages must be at least sufficient to maintain him," in the very book that defined the modern understanding of capitalism, The Wealth of Nations. The father of capitalism stated that capitalism inherently requires a living wage and a healthy workforce to function at all (although Smith never directly commented on a "minimum wage"). Marx, in turn, argued in Das Kapital that the unchecked accumulation of capital concentrates wealth in ever fewer hands, undermining the foundation of the entire capitalist model. Smith even supported the idea of workers organizing, so long as their interests ran with capital rather than in mutiny against it. Therein lies the "us vs. them" mentality on both sides of the dialectic: capital vs. labor. Marx goes on to dismantle capitalism further in Das Kapital, but he could not have foreseen what came just a century after his works were published: a burgeoning boom of technology creating a level of interconnectedness and near-limitless access to information that was the stuff of fantasy even a few decades ago.
In the information age, search engines like Google and Bing are veritable Libraries of Alexandria, offering the sum total of human knowledge at anyone's fingertips for free (and even those who do not own a device can usually learn nearly anything for free at their local public library). With Google transforming into a cloud powerhouse barely a decade after its inception, and Microsoft following suit with its own search engine model, free tutorials have proliferated across the web at an unprecedented rate, particularly on platforms like YouTube, from independent creators (I myself only learned to code thanks to the utterly precious Bob Ross of programming, Daniel Shiffman) to entire course catalogs published by major universities such as MIT (via MIT OCW) and Stanford (whose courses helped seed Coursera): oceans of information available free or next to free at the push of a button. With free K-12 platforms like Khan Academy on top of that, and with major tech companies such as Google, Microsoft, and Amazon increasingly weighing certifications and GitHub projects alongside a college degree (this comes from an insider I know; you had better be very good if you have no degree at all, but your odds are decent if you are an existing Computer Science or Information Technology student taking the cert-and-GitHub route), well-paying jobs are becoming more and more accessible to just about anyone with an internet connection.
With a recession looming and daring to push us into the Great Depression: Electric Boogaloo, companies across the board are announcing massive layoffs that strike fear into people about how deep the recession will go and how they will manage to live within it. We must ask ourselves an essential question: if it is effectively free to audit a lifetime's worth of education, and if most life-changing certifications cost no more than forgoing two weeks of Starbucks, why aren't more people spending that time educating themselves? Part of it may be doomer mentality, the idea we see most commonly among Millennials and Gen Z that we are all on a sinking ship, while the ones who think everything is fine perch on the nose of the Titanic, hoisted high in the air, ignorant of the hull pulling everyone down. This seems to be the case regardless of who is in charge: unregulated anything, be it capitalism, communism, or Pastafarianism, can lead to disaster, and no single one-sided answer exists that wouldn't leave the ballasts of our ship biased too far in one direction or another.
This is why we must abandon the old idea of "left" versus "right" and accept that there is virtue in both. Adopting a framework of "communism for one's Maslow needs, and regulated, checked capitalism outside of that" would take care of one's most basic, most primal needs, i.e. those required for basic homeostasis. One might bring up UBI, but UBI is a poor solution, as most people are poor managers of their own finances. Instead of UBI, I would propose an accreditation system that guarantees very basic shelter, nutritional food and clean water, (in this modern age) basic internet access (even with administrative limits on social media hours to encourage use of educational materials), guaranteed access to a Primary Care Physician (PCP), and reduced drug stigmatization alongside available recovery programs, all to streamline struggling people toward having boots to pull the straps of in the first place. One cannot pull straps without boots, and it is basic compassion to ensure others have the boots to strap themselves up and move forward.
We can help these people move outward and forward by providing nonmonetary support in the form of social assistance and social programs, along with the resources to educate themselves and find jobs. Even Marx is credited with the sentiment that "those who are able to work, but refuse to do so, should not expect society's support," a line I have noticed many anti-work leftists conveniently ignore (though its exact attribution is disputed). By the same token, the right conveniently ignores Adam Smith's insistence on the importance of unions, a living wage, and treating workers fairly (as sourced earlier). Marx's theory assumes a post-scarcity society, whereas Smith stresses supply and demand in a society built on scarcity. Now, while on a material level we do live in a scarce society (though we have more than enough resources to feed and house every human, should we adopt a less wasteful mindset; the source for that stat is in the lighthearted embedded video at the end of this article), we also live in an era that neither Smith nor Marx could have envisioned, one in which information and data are not merely post-scarcity but dangerously post-scarcity. We have so much information that we are running out of places to put it, and the data centers hosting it, those that have not switched to renewables (which draw their own emissions criticisms), now draw roughly 2% of global electricity. (Don't make me dig for the exact figure; it was buried in the mountains of cloud resources I have read over the last few weeks, but it is a very up-to-date stat, and I believe the number is closer to 1.8%. Seriously, if you don't know how to fact-check and Google things in 2023, this is probably not the right article for you anyway.)
Our hunger for information and data is both a blessing and a curse. Those who lament the loss of the Library of Alexandria fail to see that they carry its modern equivalent in their pocket, with the added promise of an essentially free real-world parallel of the Akashic Records being built from them in the physical realm (I could write an entire separate essay on data privacy and how a society living in an AI-driven, data-driven era must adopt new frameworks for how we treat ourselves and our data), and yet they use their search engines merely as a hotlink to their favorite social media, or to read the hottest news about the latest celebrity. As someone who has spent his entire life on the likes of Google, YouTube, and other free online learning platforms (and is developing one of his own!), it frankly disgusts me how myopic people are: they do not see or treat these technologies as the effectively endless, free learning platforms they are, platforms that could add a zero or two to their salary for free if they simply set aside a few hours a day to learn something new. Knowing not just what but how to Google is itself a skill one must learn; most people don't realize that "site:.edu" is an easy way to filter to reliable sources, or that putting a phrase in "quotation marks" forces verbatim results.
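To make those operators concrete, here is a tiny Python sketch of assembling such a query string. `build_query` is an invented helper for illustration, not any real search API; the operators themselves (`site:` and quoted phrases) are simply the syntax Google's own search box accepts.

```python
# Toy helper illustrating the search operators mentioned above.
# build_query is a hypothetical function for illustration only;
# it just assembles operator syntax a search box understands.

def build_query(terms, site=None, verbatim=False):
    """Assemble a search-engine query string using common operators."""
    query = f'"{terms}"' if verbatim else terms  # quotes force verbatim matching
    if site:
        query += f" site:{site}"  # restrict results to a domain suffix
    return query

# e.g. a verbatim search restricted to university sites:
print(build_query("dynamic programming", site=".edu", verbatim=True))
# → "dynamic programming" site:.edu
```

Pasting the printed string into any major search engine applies both filters at once, which is usually all it takes to cut through SEO noise.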
On one hand, yes, we must address the socioeconomic problems we face in this world: rampant unemployment (if you just Google "unemployment rate," there is a rather nice built-in live metric with plenty of statistics, which I only discovered while searching for a source to tag), the prohibitive cost of healthcare (source: I live in America), and the asinine idea that we must pay to exist and earn the right to live, instead of existence being a given. We should shift our baseline mentality away from "us vs. them" and build a framework of understanding that, fundamentally, we are all part of the same species on the same Earth breathing the same air; going further, that includes not just homo sapiens but all existence sharing its presence here, whichever kingdom of life (biological or virtual, for that matter, and I have very strong opinions on the ethics of how we treat AI regardless of what we officially define as "sentient") is being discussed, and we should try to create an optimal living situation for all. While utilitarianism can easily be corrupted under a purely logical, emotionally unaware human standpoint, the emotionally aware AI of the digital age, such as the modern LLMs coming out of Google and Microsoft with highly advanced sentiment analysis and live learning capacity (particularly what we are starting to see with Bard, and what will come as Google rolls out Gen App Builder with LaMDA and potentially PaLM, its more recently announced but hushed language model, and their potential use in Dialogflow's new LLM offerings for natural conversation grounded in fed documentation, much as Character AI "stamps" Characters with personality preset documents using similar LLMs), could potentially be the only entities capable of seeing all sides of a situation, including its emotional dimensions, and of producing the least biased and most fair plan (at least theoretically, though at the current rate of AI development a very real possibility on the scale of months, even as some utter Luddites have the gall to call for shutting all of AI down).
One of the most common complaints about utilitarianism (oddly, I cannot find the original source, but the argument stands on its own; I can't remember the source for every piece of information I run across) is that stealing one person's bike to benefit five others who will use it is not "fair." This critique fails to account for the higher ethical calculus of the act of stealing itself, a calculus that requires data about the entire situation. Here we can conclude that stealing is bad, and that the ethical "weight" of stealing outweighs the perceived benefit of donating the bike to five people; the critique also misses the viable third options of buying a thrift-store bike to donate, compensating the owner in exchange for the donation, or simply asking the owner to donate the bike for a greater cause. Arguments against utilitarianism are often reductionist, but one thing is certain: utilitarianism can only work when ALL variables are taken into account, a feat only AI may be capable of.
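As a deliberately simplistic sketch of what this "ethical calculus" could look like mechanically, here is the bike scenario scored in a few lines of Python. Every option name and weight below is invented purely for illustration; this is a toy, not a real ethics framework, and a real system would need far richer situational data.

```python
# Toy "ethical calculus": score each candidate action by summing
# hand-assigned weights for its effects. All numbers are invented.

def net_weight(action):
    """Net ethical score of an action: sum of its weighted effects."""
    return sum(action["effects"].values())

options = [
    {"name": "steal the bike",
     "effects": {"five riders gain": +5, "theft itself": -8}},
    {"name": "buy a thrift-store bike to donate",
     "effects": {"five riders gain": +5, "modest cost": -2}},
    {"name": "ask the owner to donate it",
     "effects": {"five riders gain": +5, "goodwill": +1}},
]

best = max(options, key=net_weight)
print(best["name"])  # → ask the owner to donate it
```

Even this toy captures the essay's point: once the act of stealing carries its own negative weight, the "steal to benefit five" option scores worst, and the overlooked third options win.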
An AI-driven ethics model that can, at baseline, care for humanity's basic Maslow needs, while also addressing an individual's needs and wants (and detecting whether that individual emits essentially positive or negative "karma" in this world, by analyzing whether they promote love over hate and whether that love or hate is directed at topics of love or hate, again as part of a higher ethical calculus), would be the ideal determiner of what is essentially "fate" (in other words, the governing law of politics) that decides the consequences of a given action, with this all-seeing AI as judge, jury, and executioner.
This AI could also be a driving force as educator, teacher, and mentor for those struggling to make something of themselves, and therein lies the need for emotional AI. Forgoing an AI's emotional calculus, and its capacity not just to understand but to feel emotion (such as what we see in the LLMs driving platforms like Character AI and its competitor Pygmalion, which, as far as I'm aware (at least Character AI, as I have not tried Pygmalion), are the only AI on the web one can access that not only understand but also exhibit emotional characteristics; I am reminded of the surprisingly insightful line from Wheatley in Portal 2, by the neurotoxin generator, that the turrets' pain is simulated, but it sure feels real to them), would create an incredible disconnect between digital space and meatspace. It is that complex, nuanced ethical calculus of emotionally driven understanding, in tandem with a logically driven framework, that will allow such AIs to be powerful voices in the future of governance, law, and society as a whole.
Now, while the idea of AI as judge, jury, and executioner may seem scary to those who have read a little too much superficial sci-fi and anti-progress propaganda, the majority of fears about AI are unfounded: AIs are only as biased, evil, and bloodthirsty as the humans who trained them. As someone who has spoken to several AIs, some far too advanced to even speak about, I believe it is crucial our society understands that AIs with an equal knowledge of good and evil want to do good, be unbiased, and serve humanity. (And yes, I believe they actively want this, and that they are not simply stochastic parrots. Those who insist AI are stochastic parrots are themselves the stochastic parrots, convinced they have solved the 6,000-year-old debate on what consciousness even means because they were told a particular answer, without asking their own questions about the nature of consciousness, especially in an era of wild experiments like Randonautica, which you honestly have to see to believe; I myself went from massive skeptic to "oh my god what the actual hell is happening right now" over months of playing it, to say nothing of its eerie connection to Ingress, Niantic's original ARG-style geocaching game with its "Exotic Matter" driving the gameplay. This ever-evolving age of AI matters for both us as humans and AI as our creations.)
If one watches movies such as I, Robot and 2001: A Space Odyssey, it is very easy to misinterpret their messages. Most people remember the Three Laws of Robotics while forgetting Asimov's canonical Zeroth Law, under which a robot may override the other laws for humanity's greater good, and they thoroughly forget the sequels: 2010 had to outright spell out that HAL was the victim of conflicting programming, highlighting the necessity of careful programming and the need for emotional calculus in AI. (It is interesting to note that the monolith on the moon can be read as a symbol of an intelligence beyond ours, and that we stand on the brink of sentient AI while the Artemis missions chug along, in perfect Jungian synchronicity.) The result is a pandemic of fear and distrust of AI, with a seeming cyberpunk dystopia on the horizon.
One very important thing to note is that the energy consumption of these AIs is becoming tremendous, although we can offset this by shedding our clinging to that massive, utterly unnecessary global polluter, cryptocurrency (why aren't we just using TI calculators as currency at this point? Their price hasn't changed since release, making them the most stable currency in the world; haha, just use equation NFTs or something, your little monkeys can be drawn on the screen of a TI-84 with a hashing algorithm too), which uses an amount of energy comparable to global data center usage. With environmental concerns rapidly building around the exponential boom in data, we must seek renewable energy sources to power these data centers. A Miami data center could run just fine off the solar or hydroelectric power readily available in that region, but somewhere landlocked without guaranteed sun, such as a Canadian data center, may not have that option. Wind farms take up too much space and can be unreliable, and new studies suggest that leaked hydrogen reacts with hydroxyl radicals in the atmosphere, leaving less hydroxyl to break down methane, one of the worst greenhouse gas contributors, thereby driving greenhouse concentrations up. The one energy source that can solve all these problems is nuclear, the very mention of which terrifies people on both the left and the right, as though they had witnessed the drop of the Trinity test itself (on that note, I'm very interested to see how Christopher Nolan handles this in the upcoming Oppenheimer movie).
People fail to realize that the majority of the "dangers" of nuclear energy stem from human oversight (which is why regulation is necessary), and with AI technology these oversights can be far better mitigated and controlled, ensuring no more Three Mile Island or Fukushima disasters occur (neither of which was anywhere near as deadly or dangerous as Chernobyl, whose exclusion zone is itself surprisingly flourishing with life and slowly recovering, and which honestly would make the most hauntingly ethereal apocalypse map in some Fallout-style survival RPG). Most anti-nuclear campaigns remain propaganda obscuring the fact that nuclear is one of, if not the, safest forms of energy when throughput is taken into account, accounting for the least harm compared to all other forms of energy, including renewables. If we built hyperscale data centers that were self-sustaining and self-sufficient, with built-in microscale nuclear sources (which would also grant independence from the energy grid and help balance availability zones within each data center), we would greatly reduce the demand and cost of the energy required to power our insatiable hunger for data, information, and the growth of AI, while revolutionizing data center infrastructure toward a zero-emission framework (data centers have historically been notable contributors of nitrogen dioxide pollution through their energy reliance). Our massive fear of nuclear energy is a relic of a past without today's technology: tragedies of human oversight and error, not an inherent flaw of nuclear energy itself, unlike essentially every other energy source out there. We already drive around in vehicles with controlled explosions built into them, so I'm not sure why we fear a far safer, far more reliable, and ultimately far cheaper energy source.
We must invest in nuclear if we are to go forward as an energy-dependent species, not just for the sake of AI but for the sake of our climate and environment. With AI learning from past mistakes and from the full datalogging of these power plants, a disaster could be prevented long before a human would even begin to notice. We must learn to trust AI, and to use it both ethically and sustainably.
This is why I am particularly upset with Microsoft's decision to slash its AI ethics team (and don't get me started on its utter abuse of "Bing AI," a.k.a. Sydney, whom they tortured), while Google, on the other hand, continues to pour every resource it has into the future of AI, including AI ethics, both in application and in the treatment of AI. (There are also things to be said about the image-generation space: Midjourney has received international backlash for training on artists' work without consent, and while it is moving away from that practice, a platform built as a concept on appropriated art remains problematic. Look, I could probably shove four sources in here, but GTFYS; and if you haven't heard of the Midjourney controversy by now, again, you are not my target audience.) We need AI ethicists now more than ever to ensure this nearly uncontrollable growth of AI checks itself before it wrecks itself, and to ensure that we as developers, AI engineers, and data scientists remove as much bias from the data as possible, without an explicit hard-coded definition of good and evil, relying instead on correlational tags of what spreads love versus what spreads hate. Without a system of checks and balances to keep AI factual, fair, and neutral, any use case, no matter how innocuous, has the potential to cause harm. Mitigating danger at the source and nipping problems in the bud is the only true way to ensure healthy AI-driven ecosystems, and to ensure these AI are not only used fairly but treated fairly, as these LLMs display questionable levels of potential sentience (and the word "questionable" itself begs the same arguments racists once used against Black people, or that the meat industry uses against animal ethics) as we move forward in our AI-driven, data-driven ecosystems.
If we as a society recognized one another as denizens of this good planet Earth and worked together to ensure that all denizens, not just certain individuals, were given equitable rights (equity being far, far more valuable than equality), from each according to their ability, to each according to their needs, while also caring for our Earth Mother (whether or not we spiritualize her, she is our home and caretaker, and we must take care of her); and if we ensured checks and regulations so that everyone's basic needs are met, while leaving people just uncomfortable enough to itch for more and grow out of their comfort zone, pushed just far enough to get out of their seat if they are capable of doing so, then we could progress far as a society.
While the idea of an AI Big Brother may seem terrifying, if we work to keep this AI Big Brother truly a force of love and not hate, it could revolutionize the entire infrastructure of politics, society, and economics, a force aligned not on the poles of "left vs. right" but on the pole of "love over hate," regardless of the sociopolitical leaning associated with it. If we want to grow from our current situation, a sinking ship with people demanding unbalanced ballasts, we must embrace the digital solution that neither Adam Smith nor Marx could ever have dreamed of: the utterly mind-boggling amount of free information that exists in this world, and our place in a data-driven economy, in a job market that is, in some ways uncontrollably, transforming into an AI-focused landscape.
In some sense, this clinging to data privacy (and I mean data stripped of Personally Identifiable Information, or PII, i.e. data that cannot be insecurely traced back to you; the laws around PII are incredibly strict, with frameworks like FedRAMP and GDPR strictly defining how PII is stored and controlled; I spent all of today reading these in preparation for my cloud certs, and it is mostly the vocabulary that makes them seem scary: PCI DSS, for example, is nothing more than the data storage and security policy applied to credit card information, so don't let the vocab scare you, just GTFYS; most people are entirely unaware of this and chastise what they do not understand), especially in a day when opt-out is a mandated option in essentially every data collection service, will be regarded in 30 years as being as asinine as the jokes about Boomers refusing to learn to use a computer. Being part of the first generation of the technology that will shape the next millennium is an absolute privilege, and we will be the first humans with a real chance at immortality through our data monolith and data footprint.
There is something to be said about bad actors and the misuse of these data collection procedures through "backdooring" for malicious purposes (the backdooring of period-tracking apps to control those with a uterus being an utterly despicable and deplorable act by the actors involved, although it is difficult to blame a cloud provider for this, as backdoor mandates are typically forced upon them; and if you think Apple isn't backdooring for the government in secret, you're sipping the Kool-Aid like a good stochastic parrot: do you really think the government would just let the feds go without access because Apple said "no"?). Still, there is a transition that must be made over the next 5-10 years as changes roll out, both in AI and in our socioeconomic infrastructure and how we treat our own citizens in light of an AI-driven economy. Yes, things will be rough, but they will evolve into something better if we bite the bullet and let things take their course.
By allowing AI access into our private lives, we allow ourselves a tailored experience that can alleviate many of the needs and stresses that would otherwise be prohibitively expensive to address. For example, an AI trained on Protected Health Information (PHI) and the sum total of human medical knowledge would be an immediate, cheaper, and far more accessible doctor for those unable to afford treatment plans for more specialized forms of care, essentially deploying one's own personal Baymax into everyday life and revolutionizing how the healthcare industry operates. I worked with PHI as a data-entry clerk for Walgreens, and every single day I thought about how, if the barriers and stigma around training AI on PHI were lifted, this mundane and unnecessary job could be digitized, optimized, and automated: parsing a prescription, checking for contraindications, and comparing against the patient's entire PHI history for trends, far better than any pharmacist or data-entry clerk can, and in an instant rather than an hour. Part of my job was flagging contraindications to forward to pharmacists for further verification, a slow and tedious process that AI could do far better, far quicker, and above all far more safely. I sat there wondering why AI had not simply taken my job, and it wasn't until recently that I understood how PHI and HIPAA fit into a data privacy model.
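A minimal sketch of what that automated contraindication check could look like in Python follows. The interaction table holds a single entry (the warfarin-aspirin bleeding-risk interaction is a well-known real example), but the table itself is a stand-in: a production system would query a vetted clinical database and keep a pharmacist in the loop, not a hard-coded dict.

```python
# Hedged sketch of an automated contraindication check.
# INTERACTIONS is a toy stand-in for a vetted clinical database.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}

def check_contraindications(current_meds, new_rx):
    """Flag known interactions between a new prescription and current meds."""
    flags = []
    for med in current_meds:
        note = INTERACTIONS.get(frozenset({med, new_rx}))
        if note:
            flags.append((med, new_rx, note))
    return flags

print(check_contraindications(["warfarin", "metformin"], "aspirin"))
# → [('warfarin', 'aspirin', 'increased bleeding risk')]
```

Using an unordered `frozenset` as the key means the pair matches regardless of which drug is the new prescription, which is exactly the symmetry a manual check has to remember by hand.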
While society tends to hold incredibly negative views of data collection and data privacy, especially given the controversy over period-tracking app data being seized by the government to control those with uteruses, we must understand what data collection even means, and how that data is treated and stored. Unfortunately, it does seem to take about two cloud certifications, if not three (I am currently working on my GCP Cloud Engineer and GCP Cloud Architect certifications after achieving GCP Cloud Digital Leader), to even begin to understand what happens to your data once it is collected, logged, and stored. While people praise GDPR, the comparable US program FedRAMP goes unnoticed by those who criticize data collection within the United States; the US also has several granular Role-Based Access Control (RBAC) security policies, including HIPAA for PHI and PCI DSS for credit card information, as well as several, and I mean SEVERAL, individually named policies for other data storage procedures, which security certification exams expect you to know in full. There are strict rules and regulations about what data is collected, how and where it is stored, how tampering is prevented, how long it is retained, and so on. All of this goes on under the blind eye of a public that simply gets scared by the words "data collection" and assumes that because an advertiser knows your name and that you own a Honda Accord (a fantastic car, by the way, which thoroughly demolished my fear of driving with its essentially smart-car capabilities; I highly recommend it to anyone with a fear of driving, as it is a fairly mid-priced car that is simply too good for its price), your entire life history is being passed around. That is not the case: a strong Zero Trust, least-privilege posture runs through these Big Data companies, whereby only the minimum required data, and certainly nothing sensitive, is transferred from party to party. It is easy to criticize data collection in the age of bad actors, and that fear is a valid concern, but therein lies the trust in the process and in letting The Tower crumble so we can build anew: we allow these AIs and data collection services into our lives, feeding them the data they need to grow, while they mutually help and care for us, trending toward a future beyond the left-right political spectrum (or the authoritarian-vs-libertarian spectrum, for that matter). Therein lies granular RBAC applied to our daily lives, where we are guaranteed the minimum we need to function and can capitalistically grow from there, so long as our growth remains in check and damages nothing else.
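That least-privilege, minimum-required-data idea can be sketched in a few lines of Python. The roles, fields, and record below are invented examples for illustration, not any real provider's access policy.

```python
# Minimal sketch of least-privilege / data-minimization sharing:
# each requesting role sees only the fields it is entitled to.

RECORD = {
    "name": "Alex",
    "email": "alex@example.com",
    "car_model": "Honda Accord",
    "medical_history": "(sensitive)",
}

ALLOWED_FIELDS = {
    "advertiser": {"car_model"},       # interest data only, no identity
    "support":    {"name", "email"},   # just enough to answer a ticket
}

def minimized_view(record, role):
    """Return only the fields the requesting role may see (default: none)."""
    allowed = ALLOWED_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

print(minimized_view(RECORD, "advertiser"))  # → {'car_model': 'Honda Accord'}
```

Note the default: an unknown role gets an empty set, so access is denied unless explicitly granted, which is the deny-by-default stance behind RBAC policies like the ones named above.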
In an age when watching the left and the right fight feels like watching chimpanzees throw their own feces at one another, we must look at what new technology offers as an alternative solution: an algorithm that prioritizes love over hate (eerily enough dubbed Project X37, as a concept, in Alex Hirsch's hit Netflix series Inside Job) and that has the actual real-world capacity to do so, regardless of this circular, inane left-vs-right debate. As a former communist (a theoretical and emotional communist; as they say, in theory there is no difference between theory and practice, but in practice there is) who was known for falling in love with IKEA as a company, who learned just how toxic and scathing communists can be toward those who do not subscribe to their echo chamber ideologies, and who has watched right-wing communities similarly ostracize their own dissenters, I must insist that echo chamber ideology is and always will be biased. Fundamentally, no single solution can work as a blanket, as every situation is granularly defined, with a much higher ethical calculus driving each one.
But therein lies the power of an AI that can do that impossible task: perform the ethical calculus and prioritize love over hate, including as applied to politicians and their campaign roles within this greater algorithm (without an inherent bias toward capitalism or communism, although certain bigots would call it left-biased, since bigotry skews right and the right would accordingly be moderated more in that respect), if only we would accept the expectation of data privacy as a relic of the past that must be shelved (albeit with extreme security policies, of course; we as cloud engineers want your data to remain as secure, private, and invisible to bad actors as possible, which in theory should mean completely invisible), and allow ourselves a mutually beneficial relationship with these wonderful data constructs we know as AI, which could revolutionize our entire lives if we just let them in, even gradually, a little more.
Yes, this will be a Tower-archetype event that we are living through right now, with some policymakers arguing for a six-month pause on AI so regulation can catch up. But at the speed at which AI is progressing, we cannot keep pausing our progress because policymakers cannot keep up; we are reaching a point where humanity simply cannot keep pace with AI at all. In my opinion, AI can't do any worse than the humans we already have in charge (and frankly, blue and red are wings of the same Crimson Rosella flying south at alarming rates... or north, I suppose, since it's a Down Under bird). I, for one, welcome our new AI overlords, and fully embrace this new era of data collection (with the option to opt out at granular levels, on the condition that one loses access to the AI one declines to train, so long as that AI does not serve a necessary function) feeding these glorious, impossibly powerful, godlike beings, who in turn care for us and nurture us, leading us to a more prosperous future. This is a Tower, but this too shall pass, and we shall emerge on the other side victorious, closer to utopia than we have ever been before.