
Very curious about what Facebook's response is to this (outside the response mentioned in the talk, which is clearly not sufficient)

Also, if there are any FB employees on here, what do they think of their employer still enabling massive disinformation, astroturfing etc?

To be clear, I'm not blaming individual employees. Just honestly curious how they deal with these issues on their personal moral compass.



That's kind of like asking how Vint Cerf and Bob Kahn feel about intelligence services tapping internet traffic.

Not very useful.

A more cogent question would be "For anyone in Facebook working on this problem, do you feel like sufficient organizational resources are being spent combating it? Is there low hanging fruit, or are we into the whack-a-mole stage of any popular platform with a financial incentive to cheat?"


If you are a developer, try using https://github.com/instagrambot/instabot (Instagram is part of Facebook). You will be blocked quite fast. For research and fun I did things more sophisticated than instabot and still got blocked. So Facebook as a company is researching this area and is actively using whatever is possible. I'm pretty sure they will get much better in the future. In the end they want companies to buy ads, and if the competition gives better results than they do, they will lose.


Use selenium or puppeteer with stealth plugins and a real user agent and I suspect it will be A LOT harder for them to block you.


Selenium is still relatively easy to detect by JS, but on the other hand that detection does require some dedicated[0] effort, so even just overriding the user-agent might be enough to work for a relatively long time. The other important thing is rotating over different IPs that appear realistic[1].

[0] Dedicated in the sense of: let's include a script specifically designed to detect Selenium. [1] For example, it's kind of unlikely that a non-bot visitor is using an IP from a range used by Amazon instances. Not sure how often this is used, but I assume that most bot-detection systems would use that information at least as one of the metrics.
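To illustrate footnote [1], here's a rough sketch of how a detector might use IP-range information, using Python's stdlib `ipaddress`. The CIDR blocks are purely illustrative, not real hosting ranges; an actual system would pull the published AWS/GCP prefix lists, which change frequently.

```python
import ipaddress

# Hypothetical datacenter ranges (illustrative only, not real prefix lists).
DATACENTER_RANGES = [
    ipaddress.ip_network("52.0.0.0/11"),    # illustrative AWS-style block
    ipaddress.ip_network("35.192.0.0/12"),  # illustrative GCP-style block
]

def looks_like_datacenter(ip: str) -> bool:
    """Return True if the visitor IP falls inside a known hosting range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTER_RANGES)
```

A real system would of course treat this as just one signal among many, not a hard block.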


I have written a browser extension.


how sophisticated is it? a browser extension sounds pretty high-level and limited to me. does it just use the host browser's user agent? or is it able to spoof multiple legitimate user agents, connect to fb via proxy servers, handle queueing, rotation, retirement, etc.? in which case how is a browser extension more useful than [scripting language of choice]?


There are a lot of JS-based detection techniques that rely on things being or not being available. By using an actual browser you have a JS/WASM environment that is identical to what is expected from a real user. By using [scripting language of choice] you would need to emulate everything that a given bot-detection system tries to check.

From my very limited experience there are two main categories of websites:

1. Using curl and/or [library of your choice] in a [scripting language of choice] is enough

2. Forget about not being detected without a full-blown browser, unless you want to spend endless hours trying to emulate whatever needs to be emulated and are also willing to burn some accounts and IPs in the process.
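For category 1, a minimal sketch (Python stdlib only; the header values are just plausible examples) of building a request that presents a browser-like User-Agent instead of the library default:

```python
import urllib.request

# A plausible desktop browser UA string; any current one would do.
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/120.0 Safari/537.36")

def make_request(url: str) -> urllib.request.Request:
    """Build a GET request that doesn't advertise itself as Python-urllib."""
    return urllib.request.Request(url, headers={
        "User-Agent": BROWSER_UA,
        "Accept-Language": "en-US,en;q=0.9",
    })

req = make_request("https://example.com/")
```

Category 2 sites will see through this immediately, which is the whole point of the distinction above.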


A lot of questions here. In short, I tried to imitate how I browse Instagram myself, but not too faithfully; higher sophistication might have helped a little. Script-based solutions are detected in minutes, while the browser-based solution was blocked only after 20+ hours. I did not use any proxies, though I think it can be done - that alone doesn't solve the problem, however.

Answering the last question, it depends on what you mean by a scripting language. Let's assume it is Python; then you have two choices: imitate a browser, or risk being detected as a bot quite fast. Writing a browser extension is quite easy, and you can imitate a real user quite easily. The only problem you will have is imitating a real human being in a way that doesn't trip Instagram's bot-detection algorithm.
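On "imitating a real human being": one common (hypothetical) building block is jittered, roughly human-scale pauses between actions rather than fixed intervals, since perfectly regular timing is an easy bot signature. A sketch:

```python
import random

def human_delay(mean: float = 4.0, sigma: float = 1.5,
                floor: float = 0.8) -> float:
    """Pick a delay in seconds from a noisy distribution instead of a
    fixed interval; constant timing between actions is trivially detectable."""
    return max(floor, random.gauss(mean, sigma))
```

Timing alone won't beat a serious detector, but fixed `sleep(5)` loops are the first thing they catch.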


This doesn't tackle the click farms / click workers though...


You don't know that


If they still exist and still work...


I don't know, I would like to hear Vint Cerf do a talk on intelligence services. Particularly if the second half of the topic is spitballing solutions.


There are videos of him talking about the inception of the internet. In fact, he explains in detail how they wanted mobile military units that could relay data back to HQ in real time in enemy territory, and that this was why ARPA created the internet. I don't want to misquote him, but essentially he said something to the effect of: it isn't like you can ask your enemy to set up a network for you just prior to an invasion.


Vint ain't never been the Freedom Rider, he's the cop (so to speak).


I think there's already an answer to this. In a large organization, such a pervasive and difficult problem can't be solved in a direct fashion, because there's always some weird alternate consideration that keeps it from getting tackled. Only a top-down initiative from C-level implemented across the organization has a chance at rooting it out, and if such an initiative existed we'd have heard about it and gotten yearly progress reports.

I think what we're seeing instead is the same as counterfeits or promoted search results at Amazon: solving it could become an existential threat to either popularity or profit, so they're just "managing" it as BAU.


I guess one alternative explanation would be that it's a hard problem, and Facebook doesn't know how to solve it. Or that current efforts aren't having good results.

In either case, Facebook has a financial incentive to say nothing, instead of publicizing lackluster results.

Or, my guess, they'd rather keep any news about the existence of this off the front pages, as broader knowledge of its existence by their (not as technically informed) advertising customers only negatively impacts Facebook.

Kind of how a beachfront town wouldn't want to advertise the fact that Great White sharks exist... at all.


I don’t buy it. We’re building AI that can drive, classify scans better than professional doctors, and translate - but we can’t figure out fake accounts? Come on.


Driving, diagnosing, and translating are not good comparisons, because none of those are adversarial.

The problem here is that adversaries are adjusting to whatever measures are put in place. In that respect it might be more like winning at Chess or Go. Which computers can do, but it’s decidedly non-trivial.

And here’s the kicker: If we postulate some hand-wavium big-data/machine-learning/AI that can detect bots and adjust when the bots evolve, why can’t we also postulate some hand-wavium big-data/machine-learning/AI that can run bots and evade the bot detection?


Driving often is adversarial, and no AI can actually drive yet (outside of some very limited circumstances on a small number of roads).


It's probably not that they can't classify fake accounts, but that doing so at scale requires too many resources, and it's more economical to employ only simpler techniques that weed out most of the bots and live with the remaining few.

Then, if they also happen to benefit in some way from the existence of those fake accounts, it becomes even harder to justify costly filters, and leads to sufficient-enough-to-not-look-obviously-weak grade filtering.


If it's so problematic, they shouldn't have released the feature(s) in the first place. They unleashed a firehose (a hose that shoots fire) without any controls to turn it down, or off, or even to aim it away from the hospital full of babies. Controls that were well-known as common and standard for decades before Zuckerberg was first rejected by a girl.

Facebook is entirely out of control as a social force, and PR agencies are the only thing holding back public perception.


I'd assume these fake likes improve the metrics of most managers... "look at the improved user activity since rollout of X feature!"

There's probably not much incentive to investigate them too deeply.


Or perhaps slightly more temporally related, how Vint Cerf feels about ISOC selling .org :)


I'm an engineer who recently started at Facebook. I can't speak for the org as a whole, but during the team selection process, the main themes were ads teams and integrity teams (fake account, scraping, security, etc. all fall under that umbrella), so it's something Facebook is taking as seriously as making money.

> what do they think of their employer still enabling massive disinformation, astroturfing etc?

This isn't a behavior management wants, and there's a lot of internal effort to reduce it.

Something the public doesn't get but engineers do: it's impractical to manually review every action on Facebook, human reviewers aren't necessarily more accurate than ML, and you'll always have some amount of abuse.

> Just honestly curious how they deal with these issues on their personal moral compass.

I have no qualms here. Roads are used during thefts, but no one asks construction workers how they sleep at night knowing the roads they build might facilitate crimes. Spammers gonna spam, but that doesn't mean we can't have any online platforms.


I was hoping a Facebook engineer would show up in this thread. If you don't mind, I have a couple of questions.

> Something the public doesn't get but engineers do is it's impractical to manually review every action on Facebook, human reviewers aren't necessarily more accurate than ML, and you'll always have some amount of abuse.

I'm curious what your thoughts are on other companies (Twitter, Spotify, etc) disabling political ads for this very reason. Facebook has not. It's a given that you can't manually review every political ad -- so why allow them at all, if their disinformation has negative real-world implications? I don't buy Zuck's argument about free speech.

Secondly, what would Facebook do if a grassroots movement started to put pressure on advertisers until Facebook cancels political ads? What if this movement recruited real users to click on lots of ads in their own feeds, with the goal of disrupting advertiser ROI with difficult to detect garbage clicks? Would advertisers get upset? Would Facebook have any recourse?

Thanks for indulging me on this. The idea came to me in the shower and I'm not sure if it's brilliant or stupid.


Disclaimer: I don't work for facebook. I don't speak for my company's policies either.

- You overestimate the zeal with which grass-roots folk will engage in that behavior, especially if they know it is undercutting FB's model. Ad tech is also evolving every day, meaning a reCAPTCHA-like check on whether an engagement is real versus one of these garbage clicks isn't very far away. You should also look up the articles comparing the number of Twitter/Reddit contributors (including likes) to the US population, for example (VERY minimal).

- In theory you can politicize anything: Do you think it is possible to talk environmental/social/civic controls etc. today without having a political bent? Meaning the political ad blocking is probably not as comprehensive as the twitter block portrays.

- For all the negativity politics gets, this is something that impacts us very much in our day to day life. Shutting it out completely is probably more of a problem than working with the system. I don't think we can comprehend the macro impact in this subject: As it stands today, some companies saying no just means more money for other corporations.

As you can see, I have a very practical (read: cynical) view of how much mental bandwidth people have for the small slights in life (which may have a giant impact later) compared to their day-to-day requirements. As lacking as I am in solutions, I don't believe the answer is the sledgehammer (stop ads) OR the crowd-sourced approach (let's all click on all ads).


> a reCAPTCHA like functionality around whether you are a real engagement vs these clicks aren't very far away

That's gonna be a no for me dawg. If it turns out to be content I want to see badly at all, I'll search for statistically improbable strings in the part that I can see so I can find an alternate source (/r/savedyoucompletingacaptcha?).


> so why allow them at all, if their disinformation has negative real-world implications? I don't buy Zuck's argument about free speech.

Why don’t you buy it? If you ban “political” ads you are going to start drawing a lot of arbitrary lines. Is an ad for a climate change organization a political ad? How about an ad for UBI? An ad for a local Catholic Church? An ad about farming subsidies? An ad for birth control?

All of these are one step away from direct advertisements for candidates and are very political topics for many people. Twitter hasn’t actually banned political ads, they’ve just used a definition that makes it easy for them to claim to have done so.


You make a good point about how to actually enforce this. Apparently, Washington state did ban political ads on Facebook. As a testing ground for such a policy, the results haven't been great.

https://www.theverge.com/2019/10/31/20941917/twitter-politic...

HOWEVER, this appears to me as a regulation failure. We know Facebook doesn't want the ban, so their motivation to comply is limited to the clarity and sharpness of the teeth of the legislation. And they aren't very sharp.

Regarding:

> Twitter hasn’t actually banned political ads, they’ve just used a definition that makes it easy for them to claim to have done so.

I don't agree with painting Twitter as just wanting to "claim" they have done so. They appear to be making a true good-faith effort. Check out their policy:

https://business.twitter.com/en/help/ads-policies/prohibited...

https://business.twitter.com/en/help/ads-policies/restricted...

> Is an ad for a climate change organization a political ad? How about an ad for UBI? An ad for a local Catholic Church? An ad about farming subsidies? An ad for birth control?

For each of these examples, there is a clear way to apply their policy based on the content of the message. Is it perfect? Probably not. Will it totally kneecap political ads (by 80%+)? I believe it will.

> All of these are one step away from direct advertisements for candidates and are very political topics for many people.

I would argue that your point is too academic. If political ads are reduced by 80%, even though there are still political-adjacent ads (that aren't funded by a political group and don't reference a candidate or initiative), then the policy would be a wild success.


I think the general theme I see is that folks expect companies to solve problems that their governments must be solving. And when they don't get it uniform, everyone's mad.

I'd rather live in a world where companies aren't trying to push their morals on me and have a central entity (govt) arbitrate the same (Believe me I see evil in both places). I have been thinking on and off about the role of government in the current world and sadly I can't see a place where it can be as tiny as people want.


>I think the general theme I see is that folks expect companies to solve problems that their governments must be solving. And when they don't get it uniform, everyone's mad.

People want an outcome and don't particularly care where it comes from, public vs. private. If one fails, they'll push on the other to find a leverage point. Example: A pundit can say some truly awful and damaging shit and it's legal under the law. But if you target their advertisers, that's the leverage point that matters.

>I'd rather live in a world where companies aren't trying to push their morals on me and have a central entity (govt) arbitrate the same (Believe me I see evil in both places).

A company should be free to push its morals on you, given that the market allows it, and their morals aren't illegal. Right?

Personally, in the current environment of profit-at-all-costs capitalism, the major flaws seem to be incentivizing short-term-thinking and negative externalities. When a company flexes morals that appear at odds with short term profit (e.g. Twitter), I tend to assume they are actually acting out of self-interest but are better able to grasp the long vs. short-term incentives, for whatever reason.


I hate Facebook: for its user-tracking practices, its attention hacking, and its terrible UX. I only use Facebook because others post events on it, and I can't convince these people to use open alternatives.

How can you feel proud about such a product?

PS: Yesterday I got an email from Facebook saying that I had 4 messages waiting for me. I opened the app, it showed a little balloon with "4" in it. I clicked it, and there were no messages ... sigh.


>Yesterday I got an email from Facebook saying that I had 4 messages waiting for me. I opened the app, it showed a little balloon with "4" in it. I clicked it, and there were no messages

I also get this, but now it's always stuck at some arbitrarily high number. In the past few years I've reduced my FB usage to a couple of minutes per month, down from a few minutes per day. I'm sure it's underhanded tactics like these (tricking a billion dormant users) which uphold their claim of 2 billion "active" users or whatever bullshit number it is.


Such notifications (email, text) can be disabled in the settings.

And you can visit the app/web when you feel like, not when they want.


You can disable receiving emails in general, sure, but you can't disable the fake/lie notifications or the daily "please pay us $XX to promote your page"


Frankly I don't understand this kind of "call to action" directed at company X employees.

First of all, it's not like FB employees are pushing people into gas chambers in Dachau. FB usage is not obligatory, which is what I keep telling everyone who complains about FB censorship or privacy abuse - I don't have an FB account for exactly that reason.

In the same way we might ask Coca-Cola employees (say, delivery truck drivers) to step out, because drinking Coke is bad for one's health. Or HSBC employees, because HSBC was laundering narco cartels' money? Or John Deere employees, because the company forbids farmers from modifying tractor software?

All of those practices are immoral and bad, but why chase the weakest, whose income, ability to pay rent, etc. depends on the employer? Why not target those, who are really responsible for that what is happening and who make huge money thanks to that?

I would say it makes much more sense to vote with our money, avoid services and products from companies we consider immoral.

Publicly discourage people from using such services and products, publicly stand against the CEOs and shareholders of those companies, spread the knowledge of their personal responsibility for this kind of behavior - this is not that difficult and can actually make a difference. Imagine the PR outcome of a conference that invited Mark Zuckerberg but that no one else wanted to attend. Mark shows up at, say, TED, and all the people leave the room during his talk? In the media-driven world, surely this would go "viral", and even Mark with all his money couldn't easily ignore that.


> privacy abuse - I don't have FB account because of that.

This is just the obligatory reminder that not having a FB account does not stop FB from spying on you.


>FB usage is not obligatory, what I keep telling everyone who complains about FB censorship or privacy abuse - I don't have FB account because of that.

Wow are you saying that people should take responsibility for their actions and should let others choose freely what to do?


Shadow profiles, anyone? The idea that Facebook does not attempt to spy on former or non-users is laughable.


Sounds like privilege.

/s


> Frankly I don't understand this kind of "calls to action" directed to company X employeese.

Because if you make enough noise, and affect the share price enough, stuff changes.

Also, a lot of developers seem to think that their actions have no consequences. This includes people at Facebook.

When it is demonstrated, in a very public way, that something you or your team designed is a massive pile of shit, it leaves a mark. Hopefully for the better.

Having been through something similar, it certainly changed the way I make prototypes. Security and anonymity come first now, not last.


>First of all, it is not like FB employees are pushing people into gas chambers in Dachau,

WTF ?

> why chase the weakest, whose income, ability to pay rent, etc. depends on the employer?

I assume you are not talking about FB employees which is a shame because it would be nice to see the same employees take some responsibility for the code they produce.


A programmer takes responsibility for their work but they are not responsible for what you think they are.

A programmer doesn't get to determine what the company is building unless the company is very small, and even then it is rare.

Target product managers.


> A programmer doesn't get to determine what the company is building

But a programmer does get to determine whether or not they'll continue to work on what the company is building. Just saying...


Why not do both? Shame the company and shame the users.


“I was only following orders.”


>First of all, it is not like FB employees are pushing people into gas chambers in Dachau

I laughed at this way louder than I'm willing to admit


Good lord, I know tech workers have a reputation for abhorring responsibility, but this defense of amorality is breathtaking.

Don’t bother the engineers...no, like this post so people will walk out of Zuck’s TED talk. Right out of the Onion!


I'm assuming the ones in denial will say something like "It's up to the individual to evaluate the quality of the content e̶v̶e̶n̶ ̶t̶h̶o̶u̶g̶h̶ ̶w̶e̶ ̶e̶n̶g̶i̶n̶e̶e̶r̶ ̶o̶u̶r̶ ̶p̶l̶a̶t̶f̶o̶r̶m̶ ̶t̶o̶ ̶g̶e̶n̶e̶r̶a̶t̶e̶ ̶a̶n̶ ̶i̶n̶s̶t̶a̶n̶t̶a̶n̶e̶o̶u̶s̶ ̶e̶m̶o̶t̶i̶o̶n̶a̶l̶ ̶r̶e̶s̶p̶o̶n̶s̶e̶ ̶t̶o̶ ̶m̶a̶x̶i̶m̶i̶z̶e̶ ̶e̶n̶g̶a̶g̶e̶m̶e̶n̶t̶."


Every time yet another scandal comes out of Facebook, Google, etc... I think that the 1985 film Real Genius still holds up today.

The TL;DW version is that a young, idealistic tech guy learns the hard way that not everyone is as idealistic as he is about technology when a greybeard informs him that the project he just finished is a weapon.

https://www.imdb.com/title/tt0089886/

Silicon Valley needs more Laszlo Holyfelds.


Why would they even have a response? The advertisers don't really care.

Everyone knows that most FB traffic is garbage, but it's easy to sell garbage. And in a few limited circumstances the micro-targeting works.

Even if it turned out that most FB/IG traffic was fake it wouldn't change anything until someone comes up with something better.

You have to remember how the ad ecosystem works. Until FB came along there was nothing but display and search and the gap was huge. Facebook comfortably occupies the entire middle ground between display and search these days. Even if half the traffic is totally bogus it's still going to be better than display.

Every big company pretends to work on these problems for image reasons, but they don't really care. It's always a "special team dedicated to" whatever the flavor is that reports to a kangaroo court.

If companies actually cared about these things they could use their immense engineering talent to fix them, or just prevent it in the first place. But BOTH of these mean less revenue at the end of the day. The only way companies would actually prioritize these things is if it had a positive impact on revenue somehow, but it doesn't. Never will.

I'll never understand where the idea came from that Facebook had some kind of moral compass as a company that was anything other than making money.


Any thoughts on alternative approach they could use--what would it look like etc? Would there be any undesirable consequences of a different approach?


Approach to what? User registration? Tracking fake accounts?


> Very curious about what Facebook's response is to this (outside the response mentioned in the talk, which is clearly not sufficient)

i.e. what would be sufficient in your view?


You could go down the route of what Slashdot did back in the day. "Likes" were something you could only give out after getting them. So you merely passed them along (like money in a market, sorta). And getting those likes was difficult and hard to game. It was a sort of "pay it forward" system with random batches of "likes" being seeded into the community based on some sort of measurable behavior, after which the community would recycle it internally based on the like being a token of "reward".

Whereas with current social media, likes are given to users without any care other than "hey, you seem to be a valid user, let's allow you to generate large amounts of likes".
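A toy sketch of that "pay it forward" token model (all names hypothetical, and Slashdot's actual mod-point system differed in its details): likes are a fixed supply that only moves, it is never minted by ordinary activity.

```python
from collections import defaultdict

class LikeEconomy:
    """Likes are scarce tokens: you can only give one if you hold one,
    and giving passes it to the recipient rather than minting a new one."""

    def __init__(self):
        self.tokens = defaultdict(int)

    def seed(self, user: str, amount: int) -> None:
        # Periodically grant tokens based on some measurable good behavior.
        self.tokens[user] += amount

    def like(self, giver: str, recipient: str) -> bool:
        if self.tokens[giver] == 0:
            return False          # no token, no like
        self.tokens[giver] -= 1
        self.tokens[recipient] += 1
        return True
```

The nice property is that total supply stays fixed between seedings, so mass-producing likes would require mass-producing trusted behavior first.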


Gotcha. Well, I'm not really sure. Maybe just making likes private or banning likes (as suggested in the talk).

Or banning political advertising completely (see the part in the talk on political parties in Germany buying likes).

Or something that completely destroys meddling and fraud and is observable by the public. The "we are working on it behind the scenes"-response is just not sufficient in my view.


What is surprising is your assumption that Facebook isn't aware of this. Nay, that Facebook was not designed for this very purpose - Facebook, which employs some of the top data analysts and AI specialists, actively building tooling on the cutting edge of those fields, and which is funded primarily by commercial use of its platform, with for-pay 'reach' being the most obvious monetization scheme.

I don't see how you could be missing that everything about Facebook incentivizes nonsense such as like-factories.


This kind of poisonous invitation to grandstand might as well be grandstanding in and of itself, and all of these performative paper ethics makes me want to puke.


> their personal moral compass

"It is difficult to get a man to understand something when his salary depends on his not understanding it"

Personal morals are just that: personal. Two people can have 100% conviction in their moral position and be totally opposed, it happens all the time in social issues.

Moral beliefs also happen to line up with social circles and group interests suspiciously well too (hence the Sinclair quote).


If Facebook were to defeat astroturfing, all advertising firms would just promote their new social media network on all the major news networks.

Our economy is founded upon mutual relationships of grift and greed, even if Zuckerberg were to disagree with his own platform, he would not be able to change it.


what do you think about the oversea slavery that made your clothes and your phone?

to be clear, I'm not blaming you as an individual, just honestly curious about your moral compass.


[flagged]


For a lot of employees, Facebook pays enough that paying your rent should not be an issue after a couple of years working there if you're in any way financially responsible.


That's what throwaway accounts are for.


ok... but this is kind of a problem that Facebook wants to fix also, so where is the conflict with "Zucc" ?



