Candid smartphone app
© BeCandid
HAL 9000 it is not, but Candid sports a sleek and "infallible" AI that analyzes your messages to make sure they're nice. It also makes sure they're true. We wouldn't want to have the wrong opinions, now would we?
A curious thing happened on the way to losing my freedom of speech: A whole bunch of YouTube vloggers, normally known for their fanatical libertarian views, started shilling for Candid, a slick new smartphone app that purports to encourage free and anonymous speech. Of course, Candid is anything but candid - it should be renamed to Censorship.

Don't take my word for it; here is BeCandid founder Bindu Reddy talking about the technology and AI of Candid.


It seems Bindu (a Google alum - what a surprise) is really bored with those daily living tasks, and can't wait for computer AI to take over things like remembering important dates (though they don't seem important enough to remember) or driving to work.

Hrrm: an NLP (as in Natural Language Processing, not the other kind) algorithm? The most worrying thing about this video is that this AI supposedly tries to detect sentiment. Humans have trouble accurately detecting sentiment in the written word, because text communicates tone poorly. Anyone who has grown up on the internet knows that no matter how many emoticons and emojis you use, misunderstandings happen. What Bindu and Candid are proposing is an AI system that knows what you mean, perhaps better than you do.
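To see how brittle this is, here is a minimal sketch of a naive lexicon-based sentiment scorer. This is not Candid's actual system (which is proprietary); it just illustrates the kind of word-counting approach that sarcasm and negation defeat:

```python
# Toy lexicon-based sentiment scorer -- an illustrative stand-in, NOT
# Candid's real algorithm. Word lists are made up for the example.
POSITIVE = {"great", "love", "nice", "wonderful"}
NEGATIVE = {"hate", "awful", "terrible", "stupid"}

def sentiment(text: str) -> str:
    """Score text by counting positive vs. negative words."""
    score = 0
    for word in text.lower().split():
        word = word.strip(".,!?")
        score += (word in POSITIVE) - (word in NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Sarcasm sails right past the word counts:
print(sentiment("I love waiting in line, it's just wonderful"))  # "positive"
# Negation flips the meaning, but not the score:
print(sentiment("This is not terrible at all"))  # "negative"
```

Real systems use statistical models rather than raw word lists, but the underlying problem stands: tone, irony, and context live outside the text itself.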

Well, what kinds of sentiments, you might ask? I wonder if we could find an official representative to give us an idea of what kinds of "hate speech" and "sentiment" the system is going to filter out.

Spenxer from BeCandid
© Product Hunt
Text of image transcribed:

Ashley Meyer: @jellywish @sachinag all this really means is that loads of nonabusive posts are going to get removed and real abuse is going to stay up for days or weeks. unless (hopefully?) by AI you mean a team of attentive humans.

Spenxer Janyk: @ashley_meyer @sachinag Hey Ashley! We don't believe in harassers, meninists, abusive bullshit, etc. They can expect to get kicked out of Candid and will get their comeuppance during the revolution. We have human moderation as well. :-)
It seems that Candid is listed on Product Hunt and they've sent Spenxer along to be a Product Representative. The hilarious thing is, he uses meninist as an example of someone who would be banned. If you aren't in the know, the meninist movement is a parody of feminism whose "adherents" replace words like "women" with "men" and "patriarchy" with "matriarchy" in feminist writings. Apparently Candid does not find that funny - apparently it's "hate speech"? A little much. Obviously Spenxer is trying to appeal to the commenter Ashley Meyer, whom he expects to be a feminist. No dice; she responds in epic style:

Spenxer from BeCandid
© Product Hunt
Text of image transcribed:

Ashley Meyer: @jellywish @sachinag "comeuppance"?? I think it's very dangerous to freedom of speech to overzealously take down any content just because lot of users flag it, so if that's your strategy I'd like to know about it before I move my social presence into the app.

Spenxer Janyk: @ashley_meyer @sachinag Flagging is only one aspect of the moderation involved. Our goal is to keep the Candid community safe and congenial.

We agree that freedom of speech is an important constitutional right, but we don't think that means private companies should be required to promote or tolerate online harassment. We'd love for you to participate in the Candid community! Please let me know if there's anything more I can do to help address your concerns.
A Washington Post Business article about Candid seems to spell it all out.
Candid's secret sauce is in its artificial intelligence moderation, which aims to weed out bad actors by analyzing the content of posts and keep hate speech and threats off the network. It also has other interesting features: For example, its algorithm tries to weed out false information by marking certain items as "rumor" or "true." Conversations on the network — even about politics and other controversial topics — are, by and large polite. That's a good sign and shows that the algorithm seems to be working.
I'm sorry, Tyrant says what?

So questions about 9/11 would be marked as? Information on Hillary Clinton or Donald Trump would be?

The question you may be asking is: how does this affect me? The answer is scary. AI is really a sham: it's clever algorithms plus tons and tons of human input to "learn" from. The specifics of their algorithm are irrelevant; what is relevant is their data source: YOU. That's right, anyone using this app to say anything is essentially contributing to the program that will one day decide what they can and cannot say online. Should their system prove successful, access to the algorithm as a service will most likely be sold, or worse, required, for online social interactions. When that happens, your ability to speak freely will be dead, via the back door of private corporate prior restraint. No constitutional rights violated, but effective censorship: Achievement Unlocked.