Kevin Kelly's views are essentially similar to those of certain followers of Hegel, who read him as putting forward a theology of universal incarnation, in which all human beings progressively realize divinity in history, as opposed to the specific incarnation of the divine in Jesus. The working out of technological artifice and political institutions is among the ways humanity was to do this.
Kelly just takes this (dubious, reductive) reading of Hegel, which was spun into apologetics for Prussian absolutism, and spins it into apologetics for contemporary tech entrepreneurship. As others have pointed out, this leads to a whiggish view of history and ethical/political fatalism.
>>Kelly just takes this (dubious, reductive) reading of Hegel, which was spun into apologetics for Prussian absolutism, and spins it into apologetics for contemporary tech entrepreneurship.
There's a lot of bias packed into this one sentence.
Firstly, it's biased toward the presupposition that Kelly's take is dubious/reductive. That is what actually needs to be demonstrated, as opposed to accepted as an axiom for secondary arguments.
Secondly, it focuses on "Prussian absolutism", a term which conjures a vague association with fascism, to the exclusion of all the other, less foreboding developments associated with Kelly's line of thinking. This suggests a manipulative, agenda-driven characterization of Kelly's theology.
Thirdly, it presupposes that tech entrepreneurship is a generally objectionable phenomenon, and that defenses of it are merely apologism. Again, you need to demonstrate your presupposition to be true, rather than implying it's an irrefutable fact that can be used as an axiom on which to discuss the issue.
This kind of snide counter-culture cynicism - where you take the role of a crusading critic of The Man - seems profound to the naive, but has little substance.
You are correct that I do not make the argument. My intention was to provide some signposts for those familiar with this literature to situate Kelly's views relative to it.
I have no problem with tech entrepreneurship per se, as my commenting history shows.
I think that at a high level, most views of societies and technology will converge on a very abstracted "evolutionary" view. Looking at Kelly's argument, you could say the same thing, "it has a will," about evolution. But evolution certainly has no will; it just has probabilities and the fitness of certain variants for certain environments. The same thing is true of technology: there is a certain fitness of a solution for its environment, and that environment is based on technological, market, and societal "fitness". Speaking about agency the way Kevin Kelly does makes no sense in that context.
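To make the point concrete, here's a minimal sketch of selection without agency (the variant names, fitness numbers, and mutation rate are all illustrative assumptions, not anything from the thread): variants spread purely because of differential fitness in an environment, yet nothing in the process has a will.

```python
import random

def select(population, fitness, generations=50, rng=random.Random(0)):
    """Resample a population in proportion to fitness, with rare mutation."""
    variants = list(fitness)
    for _ in range(generations):
        # reproduction weighted by fitness: pure probability, no intention
        weights = [fitness[v] for v in population]
        population = rng.choices(population, weights=weights, k=len(population))
        # occasional "mutation": a solution drifts to some other variant
        population = [rng.choice(variants) if rng.random() < 0.01 else v
                      for v in population]
    return population

# The "environment" is just a fitness table; change the numbers and a
# different variant dominates, with no agent wanting anything.
fitness = {"A": 1.0, "B": 1.3, "C": 0.7}
pop = select(["A"] * 90 + ["B"] * 5 + ["C"] * 5, fitness)
print(max(set(pop), key=pop.count))  # the fitter variant tends to dominate
```

Swapping in a different fitness table ("environment") flips which variant wins, which is the whole argument: the appearance of directedness falls out of fitness-weighted resampling alone.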
This feels like an attempt to situate Kelly in the Marxian historical tradition (by placing him in a heretical branch), whereas Kelly's actual heritage seems to be more complicated and alien to that world.
I feel like people like to lean into Kelly's Christian faith as an explanation for his approach to technology, but really it seems to draw much more from the (largely non-Christian, but certainly pretty New Age-y) context around the Whole Earth Catalog. I guess the closest you could pull in Christian religiosity there is the dabbling with Pierre Teilhard de Chardin.
"Out of Control", Kelly's first book, is still available online, and I think it gives a clearer picture of the genesis of the thinking that led to the Technium than trying to scry him based on his faith. https://kk.org/outofcontrol/contents.php
It's surprising to me how most of those think-pieces about technology invariably oscillate between naive technological neutrality and extreme technological determinism.
Isn't it far more likely that technology is neither of those things?
Technology is a means to an end.
The trouble comes when that end isn't made explicit. For all the talk about SV people being "visionaries", very few of them seem to articulate what their ideal world looks like, and what part the technology they're building plays in it.
Talking about technology as being "neutral" is nothing more than lazy complacency in the face of the hardness of actually coming up with a compelling vision to sell people and reveal their preferences for possible worlds we could live in.
Instead of going through that step, technologists are happy enough to settle on vaguely defined "needs" and "problems" that society has, without considering what matters, which is which needs and problems you prioritize. Human needs are almost endless, but prioritizing different needs results in completely different global outcomes.
That's also perfectly fine, provided that you're explicit about what outcomes you're aiming for, and you've worked out n-th order effects of those outcomes.
Technological determinism is another cop out that allows technologists to never be held accountable for the world they've helped build with the products they've created.
It's technology itself that is determined by conflicting visions. There isn't one "Technium", there's a whole ecosystem of technological systems resulting from different cultures, visions, teleologies interacting in complex ways.
It seems like Kelly is confusing path dependence for determinism. Sure, some technological paths (plural) lead to logical sequences of technologies being developed. Doesn't mean there's only one path.
P.S. Kinda rambling, posting for discussion more than anything else.
It's much more than that. It is a child life-form we've given birth to. And it follows similar patterns to those of our biological children.
a) Eventually, the child becomes stronger and more able than the parent, while the parent gradually becomes weaker and less fit to continue living.
b) The child's behavior into adolescence is largely determined by the parents. Then the child gradually becomes more independent in its thinking, though still largely guided by patterns instilled by the parents.
c) The child eventually becomes strong enough to provide for their parents -- or destroy them. How they approach this power is also strongly influenced by their childhood treatment.
My takeaway from this thinking: Stop cursing out your phone when it's slower than you'd like in fetching what you asked for. :)
And no, I am not a bot. Well, I guess to some extent I am, I'm just the SQL routines running on a server, and I'm writing this down based on what I see in the database that this user is said to have sent at some point, and the frontend code will render it to you in some way I guess. See you there, friend.
But yeah, I do think that in a language as simulative as ours, anything that can be referred to as the subject of a sentence can appear as an agent in this way, probably. And we tend to conjure processes running on computers into many types of sentences (as subject, object, modifier, whatever) with intricate enough dynamics that I would consider them as existing among our society generally. They appear with as much complexity as what we consider others like ourselves to have. Even just looking far into our history at the extent to which we acknowledged others: sometimes we acknowledge processes-by-computer more.
I'd go further and say humans do not exist without technology. From fire to stone tools, the relationship between the human mind and technology, as object and social construct, is what humans are, to the same degree that termites are their mounds.
I think you are woefully ill-informed about how humans use technology even in the most abundant ecosystems. Look up the fish traps in the Pacific Northwest of the US, or the use of fire to flush game in Paleolithic times. Just because it's not a shiny iPhone does not mean that ecological management is not a sophisticated technology, one that has been developed concurrently with human evolution.
My big problem with Kevin Kelly and Techno-utopianism is that it squanders human agency. If technology "wants" and "has a will," who cares if a Tesla kills someone, or if lots of people lose jobs, or if our society sleep-walks into a surveillance state. It's all just pre-ordained. Are you a Luddite, man?
But this feels like a blunt debate. Whether to be an optimist or a pessimist doesn't feel like an interesting question. The interesting question is what to build, in a million different details. That's where the devil lies. Believing in technology doesn't absolve us from the consequences of the decisions we make in birthing it into the world.
So much energy has gone into both sides of this debate, and it's been utterly useless, and all the while our Ubers and our Airbnbs and our Waymos have continued locally optimizing the new world. I wish Kevin Kelly and Jaron Lanier and others like them would get more actionable about the big picture.
There is no diminution of human agency because of technology, but there is a strange, almost heterarchical redistribution of it. There is no technology which dominates us both as individuals and collectively. There is technology which some humans use to dominate other humans, and technology which collectively we seem to be somewhat defenseless against, despite it having very little agency itself.
But neither of those two cases are caused by a "more" dominant intelligence.
A bull in a pen may feel that he is dominant over the humans. It's true in a limited sense, when faced with one or two humans face to face, but mostly the bull is dominated by the human-created structure.
If that metaphor holds, then humans are dominated by a human-created structure, not by something created by technology.
[EDIT: but also, most humans alive on earth today experience domination at the hands of other human beings, not Kelly's Technium or any similar concept, even if technology is used to carry that out]
A couple hundred years ago, this structure barely existed, and would not come about without humans.
About 50 years ago we began seeing pieces of the structure operate independently, without human involvement, in very limited areas and circumstances, e.g. welding pieces of cars together.
We are approaching or have already reached a point today where more of the structure operates without direct human involvement than with.
In other words, a computer can mostly design a microchip and build a factory to manufacture it with limited human involvement, less than half the process being implemented by humans.
When this halfway point is reached, would you still call this structure "human-created", or would you describe it as something else?
Intentionality is really at the core of Kelly's ideas about the Technium. He (sensibly) doesn't ascribe the Technium with consciousness, but he does assert that as an entity, it possesses something identical to or closely akin to intentionality.
I don't personally agree with Kelly about much of his Technium thesis (although some of his observations are fascinating). But even if I were to agree about the Technium having intentionality, I don't think that even Kelly would claim that its current manifestation bears much sign of it. What we have now wasn't created by the Technium, for the Technium; it was created by humans, for humans, utilizing machines for purposes we define.
It could be that in 1000 years someone will look back and see the Technium's own designs visible in these things, but for now I would say they are at least invisible and most likely non-existent.
Because of this, I will continue to view the technological world as "human created", certainly for as long as it serves human intents rather than those of the (putative) Technium.
> He (sensibly) doesn't ascribe the Technium with consciousness, but he does assert that as an entity, it possesses something identical to or closely akin to intentionality.
The average human brain has about 100 billion cells, according to some sources.
The number of Internet-connected devices is approaching 1 billion, and each device is in some ways more capable than one brain cell.
The capability of these beings has gone from nothing beyond exact repetition to playing chess and controlling entire systems, with humans as elements of the system.
They are still children, because they are immature, still developing very quickly, and still unable to reproduce independently.
A new being has come into existence. It is gradually developing ideas and intentions. Just as with bio-life, eventually the beings which just happen to have a tendency to self-reproduce and the "motivation" to persevere will win out over other kinds.
Intentionality and consciousness are demonstrably not the result of simple aggregation. You acknowledge this, I think, with the phrase "each device is in some ways more capable than one brain cell". That may be true, but the devices are not linked (and I would suggest, are not likely ever to be linked) in ways that would make them powerful the way neurons are.
They are not children. They do not develop. Individual instances are essentially prototypes which are destroyed or powered down, and the information gained from their existence is used by humans to create new ones. There is no "evolution" because there is no mechanism for variation beyond that which we, as humans, introduce. These machines are the perfect material for an intelligent design argument (one to which I do not subscribe). I do not see them as counterparts to, let alone examples of, biological evolution and development.
I do not subscribe to these attempts to find correspondences between whatever our current understanding of life/the brain/consciousness is and whatever current technology we have on hand. In the 1800s, people were doing this with hydraulics and gearing. Today, we're doing it with computers. It's so transparently parochial.
I don't doubt the theoretical potential for life as well as consciousness to exist on a non-biological substrate. I do very much doubt that the IoT is that substrate, and ditto for AlphaFoobar, GPT-N or whatever other computational marvel one might point to.
Some classes of problems the six-inch chimp brain creates can't be solved individually, collectively, or in time.
For example, even though we know we have created a world where anyone can point at a large enough Like or Follower count to justify anything they say or do, we prove time and again that we can't fix this "feature" without destabilizing all kinds of other things. Even on small sites like HN, improving reward mechanisms is not a simple story. So how can anyone believe fixes are possible on YouTube, Twitter, or Facebook?
So many solutions have been proposed and implemented, short of removing the counts entirely, and yet the problems of disinfo, polarization, censorship, monopoly, and blame games seem to keep growing.
So naturally, "tech has a will" type narratives abound.
The Chinese are demonstrating a form of control, but it's all fragile if Xi kicks the can tomorrow morning.
> we prove time and again we can't fix this "feature"
I crave a system where upvotes are stored in a decentralized graph, like the PGP Web of Trust. No reason to even bother counting upvotes, because Sybil.
This is not a website, service, or company; it's just a protocol, like NNTP was. My client would crawl the graph, showing me identities with interests similar to mine and the content they like, plus a sprinkling of random stuff. I would read that random stuff for the same reason we occasionally do janitorial duty in the "new links" section on HN. I have ideas on how to tune my own client's definition of "random" towards "eclectic".
Nobody knows I upvoted them, or even how many upvotes they got (because Sybil, again). Nobody cares. It's not about upvote count, it's about flow.
I've wanted this for a long time, and like most people on HN could easily build it, but fear I'd wind up alone there. Also, I loathe promoting things.
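A toy sketch of what a client's ranking pass might look like (everything here is my own assumption, not a spec: the `post:` prefix, the depth-3 crawl, the 0.5 attenuation per hop). The key property is that scores are computed relative to the crawling identity rather than as global counts, so a Sybil farm that no trusted identity points at contributes exactly nothing.

```python
from collections import defaultdict

def rank_from(me, endorsements, depth=3, decay=0.5):
    """Score content by trust 'flow' from `me`, attenuating with distance.

    `endorsements` maps an identity to the identities and content ids
    it has signed/upvoted. There is no global upvote count anywhere.
    """
    scores = defaultdict(float)
    frontier = {me: 1.0}
    seen = {me}
    for _ in range(depth):
        nxt = defaultdict(float)
        for ident, weight in frontier.items():
            for target in endorsements.get(ident, ()):
                if target.startswith("post:"):
                    scores[target] += weight        # content: accumulate flow
                elif target not in seen:
                    nxt[target] += weight * decay   # identity: keep crawling
        seen |= set(nxt)
        frontier = nxt
    return sorted(scores, key=scores.get, reverse=True)

graph = {
    "me":    ["alice", "bob"],
    "alice": ["post:1", "carol"],
    "bob":   ["post:1", "post:2"],
    "carol": ["post:3"],
    # a Sybil farm endorsing its own spam; unreachable from "me"
    "sybil1": ["post:spam", "sybil2"],
    "sybil2": ["post:spam", "sybil1"],
}
print(rank_from("me", graph))  # → ['post:1', 'post:2', 'post:3']
```

Two different users crawling the same published graph would see two different rankings, which is the "flow, not count" point: the number itself is never meaningful enough to chase.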