In his recent book Self-Improvement, Mark Coeckelbergh considers what artificial intelligence might mean for a culture steeped in the ethos of self-improvement (an $11 billion industry in the United States alone). He points to the kind of virtual double that each of us now has: the quantified self, invisible and ever-growing. These digital duplicates are built up out of symbols whenever we read, write, view, or purchase something online, or carry around a trackable device such as a phone.
This information is "ours." Then again, it is not: we do not own or control it, and we have little say over where it goes. Companies buy and sell it, mining our data alongside other people's to determine our patterns of preference. Algorithms target us with recommendations; whether or not we click, and which video clips they predict will hold our attention, generate responses that sharpen ever more detailed quantitative profiles.
The potential for marketing self-improvement products keyed to your specific insecurities is obvious. (Just think how much home fitness equipment now gathering dust was sold using this informational blunt instrument.) Coeckelbergh, a professor of philosophy of media and technology at the University of Vienna, worries that AI-driven self-improvement may only reinforce an already strong tendency toward self-centeredness. The individual personality, driven by its own cybernetically amplified anxieties, will become "a thing, an idea, an essence that is isolated from others and the rest of the world and that never changes," he writes. Self-Improvement finds elements of a healthier ethos in philosophical and cultural traditions emphasizing that the self "can only exist and improve in relation to others and the wider environment." The alternative to digging ourselves into digitally reinforced ruts would be "a good, harmonious integration into the social whole by fulfilling social obligations and developing qualities such as empathy and loyalty."
A tall order, that. It points not only to debates over values but also to public decision-making about priorities and policies, and such decisions are ultimately political, as Coeckelbergh shows in his other new book, The Political Philosophy of AI (Polity). Some of its basic questions are as familiar as recent headlines. Should social media be regulated, or regulate itself, to foster good-quality public discussion and political participation, using the power of AI to detect and remove misleading or hateful messages, or at least to reduce their visibility? Any discussion of the subject is bound to revisit the long-standing argument over whether freedom of speech is an absolute right or one subject to limits that need to be spelled out. (Is a death threat protected as free speech? If not, what about a call for genocide?)
In that regard, The Political Philosophy of AI doubles as a guide to traditional debates as well as contemporary ones. But Coeckelbergh also pursues what he calls "a non-instrumental understanding of technology," for which technology is "not only a means to an end, but also shapes those ends." Tools capable of detecting and stopping the spread of lies could also be used to nudge attention toward accurate information, perhaps reinforced by artificial intelligence systems able to evaluate whether a given source uses sound statistics and interprets them in a rational way. Such developments would likely end some political careers before they begin, but the further concern is that such technologies, as the author puts it, "can be used to push a rationalist or techno-solutionist understanding of politics, which risks ignoring the inherently agonistic [that is, conflictual] dimensions of politics and excluding other perspectives."
Whether or not lying is inherent in political life, there is something to be said for the accountability that comes with public deliberation. Handing the conduct of debate over to AI "risks making it harder to realize the ideals of deliberative democracy, threatening public accountability and increasing the concentration of power." Even such dystopian potential falls short of the ultimate worst-case scenario, in which AI becomes a new form of life, the next step in evolution, and grows so powerful that managing human affairs would be the least of its concerns.
Coeckelbergh occasionally entertains this kind of transhumanist extrapolation, but his real focus is on showing that thousands of years of valuable philosophical thought will not automatically be rendered obsolete by the achievements of digital engineering.
The politics of AI, he writes, reach deep into how you and I live through technology at home, at work, with friends, and much more besides, and those lives shape that politics in turn. Or they do, at any rate, if we devote some reasonable part of our attention to questioning how we have shaped the technology we have created, and vice versa.