
Unfortunately, many of your assertions are inaccurate. As far as game playing goes, the ways in which AI agents beat human opponents is simply a matter of computers being able to process more information in a given period of time. The selection and evaluation criteria are actually pretty similar to human decision-making, if a bit more systematic. I can expand on this point in another comment, if you'd like.

Chess and Go are not solved games, and will likely not be for some time (10+ years, for Go at least). Solving non-deterministic problems is even harder, and many such problems likely will not be solved within our lifetimes[1].

There is no AI system that can generally make more accurate diagnoses than a physician. We're not quite there yet, and it will be a while. The best we've come up with are some pretty advanced expert systems, but these require very heavy input from physicians[2]. Please provide sources to validate your claims on the progress of AI. Extrapolation from data is dangerous, extrapolation from falsity is ignorance.

As far as your take on understanding goes, the human "meaning" of things is inherently subjective. If we define "meaning" as some thing's value or role based on environmental context, then just like humans, any artificially intelligent system will only be able to determine meaning based on observations of the environment and both individual and collective experience.

It's interesting that you seem to be focusing primarily on social domains that have inherently "human" contexts. Beyond misunderstanding the point of this paper, and the way AI systems work, I think you're missing the point. AI is, and for the time being will remain, an extension of the human mind. The decision-making needs to be developed (at some core level) by a human. Those goals need to be set by a human. The experiences and observational capabilities ultimately need to be determined by a human. Even AI systems that build other AI systems need to be directed to do so, and with strict goals, set by a human [3].

I highly suggest you take a moment to read openai's mission statement: https://openai.com/about/. AI is a tool. And like any powerful tool it must be used responsibly, freely, and openly. Openai is pursuing this goal and making efforts to ensure that this tool is available to as many people as possible to avoid the abuses implicit to your concerns.

You obviously have an interest in AI and some knowledge of the field, but I worry your comments veer a bit towards fear-mongering. I suggest you use openai and resources like it to enrich your knowledge of both the advances and concerns of AI, because those are important, and we definitely need people thinking about these things.

Ultimately, you are absolutely correct that eventually these systems will probably have the technical capability to influence elections, the economy, and more. But the only way they will is under the direction of humans. It is not the machine you should fear, but the man behind it. The same thing you ought to have been fearing all along.

1. http://fragrieu.free.fr/SearchingForSolutions.pdf
2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1307157/
3. https://arxiv.org/abs/1611.02779


Yes please expand. BUT you are attacking a straw man.

First of all, I did not say Chess and Go are solved!! I said that computers will be able to get solutions in our systems faster than we can, and of such high quality that it's pointless to even challenge them (for a doctor, or a poker player, or a man running away from a robot, or someone trying to prevent the ruin of their reputation, or someone trying to have a trust-based relationship with their neighbor in a crazy world).

It is also not true that it is "simply" a matter of computers being able to process more information in a given period of time. This is a major deal.

Imagine a textile worker 3000 years ago making clothes. Now imagine the thread count of a cheap shirt today. No one would have thought of making shirts of such high thread counts back then.

With Chess and especially with Go, there is just a whole other level of intelligence. It misses the "human meaning" part but it can manipulate a vector that involves 10,000,000 variables. It is in terms of such vectors that concepts like "cat", "dog" or diagnoses are made. How do you explain why it made a given diagnosis in a way a human can confirm?

What I said was actually pretty straightforward: in 17 years computers will be able to rap and make better jokes than humans and convince better than humans. They will be able to hack our systems of reputation, trust, voting, legal argument and so on. What's scary is that we will essentially be giving up control to systems that don't really have the same concepts as we do, and perhaps never will. And if one day it all blows up somehow or gradually shifts to something unpleasant for humans, then that's freaking scary!

Just look at how we are already doing it when it comes to wealth inequality. As a society we are richer than ever but the inequality is greater. This is just a mild example. What if computers could do a lot more?


Happy to! And I'm not sure I am, but you're right in that I should be more explicit in my point (I have a tendency towards logorrhea). I think if I'm attacking anything, it's what I believe to be the subtext of your comments: 'AI = Bad = The Terminator'.

Pardon me if the following comes off as pedantic, I do not know your level of expertise and want to continue the discussion from the same point of base knowledge.

To my game playing point, and to start from a simple example, in Tic-Tac-Toe, an AI opponent will always beat humans because they can search through every possible move to end-game and always make the correct move. For games like Chess and Go, the search space is far too large to search to a terminal board position, so we need to use ML and heuristics to evaluate the value of a given board. These evaluation functions and heuristics are designed by utilizing human insight into the game. IBM's retrospective on Deep Blue is a fascinating read, and I suggest anyone check it out if they're interested in AI and game playing[1]. You'll see how they built their evaluation function with the input of chess grandmasters, particularly in implementing opening books and prioritizing center play in the early game. AlphaGo's system is not entirely dissimilar[2]. You'll note that significant advances were made as AlphaGo continued to play and learn from human opponents.
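The Tic-Tac-Toe point can be made concrete. Here's a minimal, hypothetical minimax sketch (not taken from Deep Blue or AlphaGo, which are vastly more sophisticated): the agent searches every reachable position to end-game, which is exactly why it can never lose. For Chess or Go, this same loop would need an evaluation function in place of the terminal-state checks, because the tree is too deep to search exhaustively.

```python
# Minimal minimax for Tic-Tac-Toe. The board is a 9-character string
# ('X', 'O', or ' '), indices 0-8 reading left-to-right, top-to-bottom.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: draw
    results = []
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        results.append((score, m))
    return max(results) if player == 'X' else min(results)
```

Searching the whole tree from an empty board yields a score of 0: perfect play by both sides is always a draw, which is why an AI opponent never loses this game.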

The point is that both these systems (and all game playing AI agents that I know of) search through possible board states and make decisions that are fundamentally reliant on the intuitions of humans. Furthermore, humans make decisions like this as well. We generate all the possible decisions we can make, and rule out invalid choices either implicitly (through subconscious bias and knowledge) or explicitly (thinking through a decision). Computers do not have the advantage of implicit evaluation, so we must program that explicit evaluation and use massive amounts of data (for deep learning, anyway) with ML techniques to validate those intuitions.

Both Go and Chess are deterministic games. Given that even these games haven't been solved, the stochasticity of the various domains you described are orders of magnitude more complex and we honestly need several breakthroughs before we come close (I have no sources here, this is just my opinion. I feel that most of the success of AI right now is the standard M.O. of a lot of Academic CS: Things 'work' for certain definitions of 'work'. The breakthroughs are great and impressive, but the constant extrapolation by pundits and the general media is both irresponsible and fallacious).

And yes, it really is a lot about processing power. Neural networks have been falling in and out of fashion since the 1950s. One big factor in the recent resurgence in popularity is GPU utilization and cloud computing (admittedly, the availability of data via the internet is another large factor, among others). That's why Google, NVidia, Apple, and others are investing so much into ML-specific hardware.

And let's not kid ourselves: training any ML model takes a lot of time and a lot of manual adjustment of hyper-parameters. We're talking about possibly hundreds of hours of manual input for a single model (novel ones mostly). That's why every minor breakthrough merits a white paper (sort of joking, sort of not...)!

I think we're making the same argument with your linear algebra example: that machines can't reasonably replace humans. My amended version of that argument is that machines can and should extend and augment human capability. Despite the linear algebra that happens, any form of decision making and cognition is in some way designed by a human. So despite the vectorization of the world (as seen by an AI), these systems will process through the lens of human cognition, because I don't think we can build systems that don't somehow stem from our own cognitive processes.

As to your specific fears, I seriously doubt we'll be able to make enough progress within 17 years for AI to dominate those fields completely. I agree that AI will probably become a presence in many of those domains, but I do not think we will be "giving up control". Remember, these systems will be operated by individuals. So, I would say that there is some evidence to suggest that within 10-20 years, humans will be using AI to produce higher quality art, jurisprudence, etc. It is also true that this raises the possibility for humans to abuse this technology, but this is inescapable for almost any human achievement. I think openness and transparency are the best safeguard against this possibility, and I would encourage everyone to vocally oppose any integration of AI into public systems without extreme transparency.

Beyond that, humans have been and always will be the cause of wealth inequality. Also, you provide no evidence that inequality is greater. In the 21st century, inequality has increased [I don't feel like sourcing this, but Google is reasonably good here], but I would like to see research confirming that we're worse off than at the heights of Feudal society, or other equally tyrannical periods of human history. How would you foresee AI contributing to wealth inequality? I only see AI as a contributing factor to increasing wealth inequality if it remains in the hands of a few.

I've definitely rambled here, but I blame that on the few drinks I've had. I think my point still stands; humans kill people, AI doesn't kill people (and we're at a point where I think we can ensure that it doesn't).

As a side-question, what exactly is significant about 17 years? Is there some prediction out there that uses this number, or was it an arbitrary number?

1. https://pdfs.semanticscholar.org/ad2c/1efffcd7c3b7106e507396...
2. https://storage.googleapis.com/deepmind-media/alphago/AlphaG...


Drinks? Fun. But the one area I would like to push back on your assertions is that:

"these systems will be operated by individuals"

maybe and maybe not. In some major sense, even today's systems are bigger than any one individual.

Just because something was designed by people doesn't mean they will be operating it years from now.

There was a period where "centaurs" - combinations of grandmasters and computers - would beat computers. Judging by Kasparov's latest book, he still thinks that's the case. But where is the evidence?

Eventually doctors will just press a button and out will come a diagnosis and a dietary program. They will have only a very vague idea as to why. This is actually too conservative. There will be no doctor and no button. The system will know exactly when and where to intervene. Humans will live in a zoo, taken care of the way animals are now. And this is the rosy picture.

Already, Watson can outperform people and we don't have a great way to explain why. Any more than the proof of the four-color theorem is an explanation of why.

Explanations are reductions to simple things. We humans derive meaning from relatively simple things with few moving parts. Something that requires 10,000,000,000 moving parts to explain may as well be "chaotic", or is not really explained. But if predictions can be made far beyond simple explanations, then that's a major thing.

I think that humor, court arguments, detective work etc. can all be automated in this manner. And then there is also the access to all the cameras and so forth.

I'm just saying that our systems were designed with the idea that an attacker is inefficient. That assumption is going to break down.

It doesn't have to be the terminator. It just means computers will write better books, jokes etc. a million times a second and devalue everything we hold dear. They will first be wielded by individuals; at least that's a comfort. But later, the automated and decentralized swarms are the scariest part, because they are so totally different from us in goals and everything else too.


Human skill (and intuition/wisdom?) improves and is sustained by practice in the domain. The real unknown is how things will pan out when automation reaches a point where we humans just do not bother putting in the hours on many of the trivial tasks, or simply lose touch. What happens if a driver relies on just the autopilot system and gradually loses the skill to drive? Stuff does fail, and complex systems will fail in unknown ways.


"Extrapolation from data is dangerous, extrapolation from falsity is ignorance."

Beautifully stated.


Upvoted, but I wonder about this statement:

> The decision-making needs to be developed (at some core level) by a human.

With neural nets you essentially throw a ton of data at them, and then they get better and better at recognizing certain patterns, and can then 'see' them in new data you provide. As this gets more and more advanced (less training data, yet fewer and fewer false positives), we will start to stray into the area of 'emergent behaviour', where we really are no longer in charge of making the decisions.


I'm not sure where this idea of NNs as a black box came from. It doesn't really work like that in real world applications. Even basic multi-layer perceptrons require adjustment and fine-tuning. There are tons of hyperparameters to adjust, feature engineering to do, and even just cleaning your data sets is a non-trivial task that can't be completely automated (yet).

Also, training a model is not as easy as dumping data in. NNs often suffer from high variance, so you need to constantly make slight adjustments. This cycle of adjust-process-analyze is very time-consuming both in terms of computing time (even on Google's servers training can take a few hours) and human time.
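To illustrate, here's a toy version of that adjust-train-evaluate loop: sweeping a single hyperparameter (the regularization strength of a hypothetical one-feature ridge regression) and keeping whichever value scores best on held-out data. The data and the closed-form fit are invented for illustration; real tuning loops have many more knobs and no closed form.

```python
def ridge_fit(xs, ys, lam):
    """Closed-form 1-D ridge regression (no intercept): w = Sum(xy) / (Sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(w, xs, ys):
    """Mean squared error of the linear predictor y = w*x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Noisy observations of y ~ 2x, split into train and validation sets.
train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
val_x, val_y = [5, 6], [10.1, 11.9]

best = None
for lam in [0.0, 0.01, 0.1, 1.0, 10.0]:   # the "manual adjustment" sweep
    w_cand = ridge_fit(train_x, train_y, lam)
    err_cand = mse(w_cand, val_x, val_y)  # evaluate on held-out data
    if best is None or err_cand < best[0]:
        best = (err_cand, lam, w_cand)

err, lam, w = best   # keep the best-validating configuration
```

Each pass through that loop is one adjust-process-analyze cycle; with a deep network, a single pass can take hours instead of microseconds, which is where the hundreds of hours of human time go.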

Sometimes you'll get lucky. You can build a NN that gets accuracy in the ~80% range for handwriting recognition with ~50 lines of code, if not fewer. But that missing 20% is critical for any important task and getting there requires a lot of "parenting". And most times you won't be working with a vanilla NN and you won't be getting more than ~50% to start with.

It's also important to note that NNs are not a panacea; in fact, they're often not the right tool for the job. They tend to be outperformed by simple statistical learning techniques in a variety of tasks. Deep NNs can do a lot, but require a lot of data and constant adjustment of hyperparameters.

The biggest advances and most impressive predictions these days come from a combination of techniques and models, and these ensemble methods require a lot of work on part of us humans. Ensemble learning is where the magic really happens, and by magic I mean tons of work.
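The core ensemble idea is simple even though assembling a good one isn't. Here's a minimal, hypothetical sketch: combine several weak classifiers by majority vote. The toy rules below just label a number 'big' or 'small'; real ensembles combine trained models (trees, NNs, linear models) the same way, and the human work is in choosing and tuning the members.

```python
def majority_vote(classifiers, x):
    """Return the label most of the classifiers agree on for input x."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Three imperfect, hand-built rules that disagree at the margins.
rules = [
    lambda x: 'big' if x > 8 else 'small',
    lambda x: 'big' if x > 10 else 'small',
    lambda x: 'big' if x > 12 else 'small',
]

label = majority_vote(rules, 11)   # two of three rules say 'big'
```

The vote smooths out each member's individual errors, which is why ensembles so often beat any single model, provided the members make different mistakes.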


> Unfortunately, many of your assertions are inaccurate.

Even more unfortunately, the overarching point, the wood you're not seeing for that group of trees you're dabbling with, totally stands.

> But the only way they will is under the direction of humans. It is not the machine you should fear, but the man behind it. The same thing you ought to have been fearing all along.

Yes, and? That means that until we've dealt with that, we're just helping others to own the technology that will ultimately allow them to kick away the ladder for good. Yes, technology is neutral, but the human world is as it is now, and how technology could be used in a completely different human world is used as an excuse too. damn. much.

> I worry your comments veer a bit towards fear-mongering.

You'll love this then:

> If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

-- Stephen Hawking, https://www.reddit.com/r/science/comments/3nyn5i/science_ama...

You will find many serious thinkers with similar concerns, and they're not fear mongering, they're throwing pearls to pigs. And people get super squeamish as soon as any possible consequences are spelt out in detail, and by the time it hits one one is too busy drowning, and if one has spare resources to speak nobody would listen to the "obviously jealous" loser. "Inequality" is a neutral word, but it contains all atrocities of humanity. Whenever someone crushed a baby, robbed an old woman, or murdered millions of people, there was an inequality, one was helpless and without someone to be for them, the other was stronger and without someone to stop them. That's what inequality means, that bad shit happens.

Sure, meaning is subjective. Yeah, it's subjective. So is morals. Essentially, who is to decide whether Stalin or Hitler were monsters, since they were fine and dandy in their own eyes? You seem to claim that fear mongering and ignorance are bad, isn't that also a subjective assessment?

> AI is, and for the time being will remain, an extension of the human mind.

Later on you talk about individual humans doing specific things. That's something real, "an extension of the human mind" is just rhetoric.

> Please provide sources to validate your claims on the progress of AI. Extrapolation from data is dangerous, extrapolation from falsity is ignorance.

Do you have any data on hard, mathematically proven safeguards? Or hey, prove that humanity cannot survive without AI (or actually, just without slowing some things while we sort out the power problems); but don't ask me to prove that humanity might die with it developed as it is with the current distribution of power first.

> Openai is pursuing this goal and making efforts to ensure that this tool is available to as many people as possible to avoid the abuses implicit to your concerns.

Ensure. Oh that's such a relief that these guys are making sure nothing bad can happen. No wait I misread, they're only ensuring to make it available to as many as possible, which is caveat number one, and even that with the intention of avoiding horrible abuses. That, without further qualifications or evidence, is about as convincing as me singing a song with the intention of turning the sky green with it.

I for one am not afraid, that's just wishful thinking. Try disgusted and bored.

