Friday, November 22, 2024
Technology

The year of ‘does this serve us’ and the rejection of reification

2024 has arrived, and with it a renewed interest in artificial intelligence, which seems likely to enjoy at least middling hype throughout the year. Of course, it’s being cheered on by techno-zealot billionaires and the flunkies bunked within their cosy islands of influence, primarily in Silicon Valley, and derided by fabulists who stand to gain from painting the still-fictional artificial general intelligence (AGI) as humanity’s ur-bogeyman for the ages.

Both of these positions are exaggerated and untenable, e/acc vs. decel arguments be damned. Speed without caution only ever results in compounding problems, which proponents often suggest are best solved by pouring on more speed, possibly in a different direction, to arrive at some idealized future state where the problems of the past are obviated by the super-powerful Next Big Thing. Calls to abandon or roll back entire areas of innovation, meanwhile, ignore the complexity of a globalized world in which the cat generally cannot be put back in the bag, among many, many other issues with that kind of approach.

The long, thrilling and tumultuous history of technology development, particularly in the age of the personal computer and the internet, has shown us that in our fervor for something new, we often neglect to stop and ask, ‘But is the new thing also something we want or need?’ We never stopped to ask that question about things like Facebook, and they ended up becoming an inextricable part of the fabric of society: an eminently manipulable, yet essential, part of crafting and sharing in community dialog.

Here’s the main takeaway from the rise of social media that we should carry with us into the advent of the age of AI: Just because something is easier or more convenient doesn’t make it preferable — or even desirable.

LLM-based so-called ‘AI’ has already infiltrated our lives in ways that will likely prove impossible to wind back, even if we wanted to. But that doesn’t mean we have to indulge in the escalation some see as inevitable, wherein we relentlessly rip humans out of the gigs that AI is already good at, or shows promise in, to make way for the necessary ‘forward march of progress.’

The oft-repeated counter to fears about increased automation, or about handing menial work over to AI agents, is that it’ll always leave people more time to focus on ‘quality’ work, as if dropping the couple of hours per day spent filling in Excel spreadsheets will finally leave the office admin who was doing that work free to compose the great symphony they’ve had locked away within them, or allow the entry-level graphic designer who had been color-correcting photos the liberty to discover a lasting cure for COVID.

In the end, automating menial work might look good on paper, and it might also serve the top executives and deep-pocketed equity holders behind an organization through improved efficiency and decreased costs, but it doesn’t serve the people who might actually enjoy doing that work, or who at least don’t mind it as part of the overall mix that makes up a work life balanced between more mentally taxing, rewarding creative and strategic exercises and day-to-day low-intensity tasks. And the long-term consequence of having fewer people doing this kind of work is that you’ll have fewer people overall who are able to participate meaningfully in the economy, which is ultimately bad even for the rarefied few sitting at the top of the pyramid who reap the immediate rewards of AI’s efficiency gains.

Utopian technologist zeal always fails to recognize that the bulk of humanity (techno-zealots included) is sometimes lazy, messy, disorganized, inefficient and error-prone, and mostly satisfied with the achievement of comfort and the avoidance of boredom or harm. That might not sound all that aspirational to some, but I say it with celebratory fervor, since for me all those human qualities are just as laudable as less attainable ones like drive, ambition, wealth and success.

I’m not arguing for halting, or even slowing, the development of promising new technology, including LLM-based generative AI. And to be clear, where the consequences are clearly beneficial (e.g., developing medical image diagnosis tech that far exceeds the accuracy of trained human reviewers, or developing self-driving car technology that can drastically reduce the incidence of car accidents and loss of human life), there is no cogent argument to be made for turning away from using that tech.

But in almost all cases where the benefits are painted as efficiency gains for tasks that are far from life or death, I’d argue it’s worth taking a long, hard look at whether we need to bother in the first place. Yes, human time is valuable, and winning some of it back is great, but assuming that’s always a net positive ignores the complicated nature of being a human being, and how we measure and feel our worth. Saving someone so much time that they no longer feel like they’re contributing meaningfully to society isn’t a boon, no matter how eloquently you think you can argue they should then use that time to become a violin virtuoso or learn Japanese.

