Tag Archives: Google

Google segregation should take the blame

Continuing from the last post, here is a concrete example of the segregation performed by corporate social media. The result of the US election is a consequence of this phenomenon.

Yesterday I posted on Google+ the article Donald Trump is moving to the White House, and liberals put him there | Thomas Frank | Opinion | The Guardian, and I received an anti-Trump comment (reproduced at the end of this post). I was OK with the comment and did nothing to suppress it.

Today, after receiving some more comments, this time leaning towards Trump, I noticed that the first one had disappeared. It had been marked as spam by a Google algorithm.

I restored the comment classified as spam.

The problem, you see, is that Google, Facebook, Twitter and the rest of the corporate media are playing a segregation game with us. They don’t let us form opinions based on facts which we can freely access. They filter our worldview. They don’t provide us with the means to validate their content. (They don’t have to, legally.)

The idiots from Google who wrote that piece of algorithm should be near the top of the list of people who decided the result of these US elections.


UPDATE: Bella Nash, the identity behind that comment, now replies:

“It says the same thing on yours [i.e. that my posts are seen as spam in her worldview] and I couldn’t reply to it. I see comments all over that  google is deleting posts, some guy lost 28 new and old replies in an hour. How the hell can comments be spam? I’m active on other boards so I don’t care what google does, it’s their site and their ambiguous rules.”

[Screenshot: Screen Shot 2016-11-11 at 10.47.16.png]

Theory of spam relativity 🙂


To be clear, I’m rather pleased about the results, mainly because I’m pissed beyond limits by these tactics. Such tactics should not limit other people’s right to be heard, at least not in my worldview. Let me decide whether this comment is spam or not:

“In Chicago roughly a thousand headed for the Trump International Hotel while chanting against racism and white nationalism. Within hours of the election result being announced the hashtag #NotMyPresident spread among half a million Twitter users.”

UPDATE 2: Some people are so desperate that I’m censored even on 4chan 🙂 I tried to share this post there several times and got a timeout each time. I tried to share this ironical Disclaimer


which should be useful on any corporate media site, and it disappeared.

The truth is that the algorithmic idiocy started with walled-garden techniques: if you’re on one social media site, it is made hard to follow a link to another place. After that, it became hard even to know about people with different views. Discussions became almost impossible. This destroys the Internet.

Which side do you take: ASAPbio or Alphabet?

Times are changing fast and old-time thinking dies hard and ugly. We have winners and losers.

Winners’ side: the #ASAPbio hashtag signals that biologists are ready to adopt the arXiv model of research communication.

“On Feb. 29, Carol Greider of Johns Hopkins University became the third Nobel Prize laureate biologist in a month to do something long considered taboo among biomedical researchers: She posted a report of her recent discoveries to a publicly accessible website, bioRxiv, before submitting it to a scholarly journal to review for “official’’ publication.” [source: NYTimes]

OK, many people are still confused about “preprints”, mainly because the fake open movement Gold OA enshrined the preprint–postprint distinction in the brains of honest researchers looking for Internet Age ways of communication. The goal was, without doubt, to throw a shade over the well-known arXiv model (which used the name “eprints” before Gold OA was a thing), and to preserve for a while longer the obsolete print business (as witnessed by their immediate association with the legacy publishers against Sci-Hub). But now, with bioRxiv, there seems to be a historical shift.

Biologists are already leaders in imagining ways to share and validate big volumes of data. They are certainly aware that sharing all data, aka Open Science, is a necessary part of the scientific method.

The scientific publishing industry, a largely useless business in the Internet Age, is the only loser in this story.

Losers’ side: Alphabet, the parent company of Google, sells Boston Dynamics. To be clear, Boston Dynamics is one of those research groups that do hard, high-impact research. According to Bloomberg:

“Executives at Google parent Alphabet Inc., absorbed with making sure all the various companies under its corporate umbrella have plans to generate real revenue, concluded that Boston Dynamics isn’t likely to produce a marketable product in the next few years and have put the unit up for sale, according to two people familiar with the company’s plans.”

I am sure that there are all sorts of reasons for this move. Short-term reasons.

Why is this important for researchers? Because it shows that it is time to seriously acknowledge that, technically:

  • we need more data and more access to research data, not more filters and bottlenecks in the way of research communication;
  • research data on the Internet is a particular kind of big data, not even very big compared with the really big data collected, archived and used today by big commercial companies. It is technically possible for it to be managed by those who are the most interested, the researchers. This is not a new idea: besides arXiv, and now bioRxiv, see for example Björn Brembs’ calls for a modern scientific infrastructure;
  • big commercial companies are not reliable for that. At any point they might dump us. Social media venues for research news and discussions are great, but the infrastructure is not to be trusted when it comes to the management of research data.



Another chemlambda reduction and a weird Google bug


This is the end of the computation of the monus function, monus(a,b) = a-b if a >= b, else 0, for a=3 and b=20 (so the result is 0).

It is the last part of the computation, as recorded at 8X speed from my screen.

If you want to see all of it, go to https://github.com/chorasimilarity/chemlambda-gui/tree/gh-pages/dynamic , clone the repository, type: bash moving_alt.sh and choose monus.mol from the list of mol files.

This time the deterministic algorithm is used, which makes all possible moves at every step.

What is interesting about this computation is the quantity of trash it produces.

Indeed, I took a lambda term for the monus, applied it to 3 and 20 (in the Church encoding), then applied the result to the successor function and then to 0.

In the video you see that after the result appears (the small molecule connected to the blue dot; it is the chemlambda version of 0 in the Church encoding), there is still a lot of activity destroying the rest of the molecule. It looks nice though.
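For intuition, the same computation can be sketched with Church encodings in a few lines of Python. This is a toy direct evaluator, not the chemlambda graph reduction, and the particular pred and monus terms below are standard textbook ones, used as an assumption about the shape of the lambda term:

```python
# Church numerals: n is represented as "apply f n times"
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def church(k):
    # build the Church numeral for k by iterating succ
    c = zero
    for _ in range(k):
        c = succ(c)
    return c

# Kleene-style predecessor; pred(zero) = zero, so subtraction truncates at 0
pred = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda _: x)(lambda u: u)

# monus(a, b) = pred applied b times to a
monus = lambda a: lambda b: b(pred)(a)

# read off the result by applying it to the successor function and then to 0,
# exactly the read-off step described above
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(monus(church(3))(church(20))))   # 0, as in the video
print(to_int(monus(church(20))(church(3))))   # 17
```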

Something strange happened when I tried to publish the video (there are so many strange things related to the dissemination of chemlambda that they have become a family joke; some of the readers of this blog are aware of some of those).

After I completed the description I got the warning shown in the following video:  “Brackets aren’t allowed in your description”


For example, this previous video has brackets in the description and in the title as well

and all worked well.

Anyway, I eliminated the brackets but the warning remained, so eventually I logged out of my Google account and did something else.

Some time later I tried again, an experience described in this video (which took a LOT of time to be processed).

At the beginning the fields for the title and description were empty and no error was announced.

After I filled them in, what you see in the video happened.

I understand that Google uses a distributed system (which probably needs lots of syncs because, you know, that is how intelligent people design programs today), but:

  • the “brackets are not allowed” warning is bogus, because a previous video worked perfectly with brackets in the description;
  • the “unknown error” simply means that some trace of the imaginary error from the previous try was left on my computer, so instead of checking whether there is an error this time, I suppose the browser was instructed to read from the ton of extremely useful cookies Google puts on the user’s machine.


The Ackermann function in the chemlambda gui

UPDATE: The Ackermann function, the video:


I put this stuff on G+ some days ago and now I can find it only if I look for it on purpose. Older and newer posts, those I can see. I can see colored lobsters, funny nerd jokes, propaganda pro and con legacy publishing, propaganda hidden behind grandpa’s way of communicating science, half-baked philosophical ideas, but not my post, which I made only two days ago. On my page, I mean, not elsewhere.

Thank you G+ for this, once again. (Note not for humans: this was ironic.)  Don’t forget to draw another box next time when you think about a new algorithm.

A non-ironic thanks, though, for the very rare occasions when I did meet very interesting people and very interesting ideas there.

OK, to the matter now. But really, G+, what kind of algorithm do you use that keeps a lobster on my page but deletes a post about the Ackermann function?

UPDATE: The post is back in sight now. Whew!
The post follows, slightly edited (by adding stuff).
The Ackermann function is an example of a total computable function which is not primitive recursive. It is therefore amusing to try to compute it.
The point is not the value of Ack(m,n), because it grows so fast that very soon the problem of computing it is overshadowed by the more trivial problem of storing its values. Instead, it is more interesting to see how your computing device handles the computation itself, things like stacks of calls, etc., because here the fact that Ack is not primitive recursive becomes manifest.
Simply put, the fun is in seeing how you can compute Ack(m,n) without any cheating.
I tried to do this with #chemlambda. I have known for a long time that it can be done, as explained (very summarily, it’s true) in this old post
for GLC, not chemlambda (but since chemlambda does with only local moves what GLC does, it’s the same).
I want to show you some pictures about it.
It is about computing Ack(3,2). Everybody will point out that Ack(3,2) = 29, and moreover that Ack(3,n) has an explicit expression, but using that would be cheating, because I don’t want to use precomputed stuff.
No, what I want to use is a lambda calculus term for the Ackermann function (one which is not eta reduced, because chemlambda does not have eta reduction!), and I want to apply it to the arguments 3 and 2, written as lambda terms as well (using the Church encoding). Then I want to see whether, after the reductions performed by the algorithm I have, I get 29 as a Church numeral as well.
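For intuition only (the chemlambda computation uses no such evaluator), here is a sketch in Python of a Church-style Ackermann term; the particular term below is a well-known one and an assumption, not necessarily the exact lambda term used in the experiment:

```python
# Church numerals
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def church(k):
    # build the Church numeral for k by iterating succ
    c = zero
    for _ in range(k):
        c = succ(c)
    return c

one = church(1)

# a standard Church-style term: ack = λm. m (λg. λn. n g (g 1)) succ
# it uses Ack(m+1, n) = Ack(m, ·) iterated n times, starting from Ack(m, 1)
ack = lambda m: m(lambda g: lambda n: n(g)(g(one)))(succ)

to_int = lambda n: n(lambda k: k + 1)(0)  # read a Church numeral back as an int
print(to_int(ack(church(3))(church(2))))  # 29
```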
During all the algorithm I use only graph reductions!
After all, there are no calls and no functions, and during the computation the molecules which appear do not even represent lambda terms.
Indeed, lambda calculus does not have operations or concepts like fanin nodes or FOE nodes, nor reductions like FAN-IN or DIST. That’s the amazing point (or at least one of them): even though it veers outside lambda calculus, it ends where it should (or better, but that’s for another time).
I used the programs which are available at the site of the chemlambda gui http://imar.ro/~mbuliga/gallery.html
(which is btw online again, after 2 days of server corruption). Here are some pictures. The first one is a screenshot of the Ack(3,2) “molecule”, i.e. the graph which is to be reduced according to the chemlambda rules and according to the reduction strategy which I call “viral”.
After almost 200 reductions I get 29, see the second figure, where it appears as the molecule which represents 29 as a Church numeral.
[Image: ack_3_2_final] Wow, it really worked!
You can try it for yourself, I’ll give you the mol file to play with, but first some details about it.
I modified a bit the awk script which does the reductions, in the following place: when it introduces new nodes (after a DIST move) it has to invent new names for the new edges. In the script which is available for download, the algorithm takes the max over all port names and concatenates it with a word which describes where the edge comes from. This is good for tracking where the nodes and edges come from, but it results in port-name lengths which grow exponentially with the number of iterations of the reduction algorithm. This leads to very long names for some ports after 200 iterations.
So I modified this part by choosing a naming procedure which is less helpful for tracking but better in the sense that the growth of names is now linear in the number of iterations. It is a quick fix; it would be just as easy to invent naming procedures whose name length is logarithmic, or almost constant, with respect to the number of iterations.
For the Ackermann function the script which is available for download is just as good, it works well; it only has this unpleasant feature of long names, which enlarges the json files unnecessarily.
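To see the difference between the two kinds of naming procedure, here is a toy illustration in Python (not the actual awk script; these schemes are simplified stand-ins, showing linear vs. logarithmic name growth, whereas the real concatenation scheme grows even faster):

```python
import itertools

def concat_namer():
    # old-style scheme: extend the longest existing name with a tag,
    # so name length keeps growing with every fresh edge
    names = ["a"]
    def fresh():
        new = max(names, key=len) + "d"  # "d" standing in for a DIST tag
        names.append(new)
        return new
    return fresh

def counter_namer():
    # quick-fix-style scheme: a global counter; name length grows only
    # logarithmically in the number of names minted
    counter = itertools.count()
    return lambda: "e" + str(next(counter))

old, new = concat_namer(), counter_namer()
for _ in range(200):
    old_name, new_name = old(), new()
print(len(old_name), len(new_name))  # 201 vs 4 after 200 fresh names
```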
Details over, next now.
In the third picture you see the mol file for the Ack(3,2), i.e. the list of nodes and ports of the Ack(3,2) molecule, in the format used by the reduction program.
Btw, do you see in this screenshot the name of the updated script? Right, it is foe_bubbles_09_10.awk, instead of the foe_bubbles_06_10.awk which is available for download.
I don’t cheat at all, see?
I made some annotations which help you see which part corresponds to the Ackermann function (as a lambda term translated into chemlambda), which parts are the arguments “3” and “2”, and finally which part represents the Ackermann function applied to (3,2).
Soon enough, when I’ll be able to show you animated reductions (instead of only the steps of the reduction), I think such an example will be very fun to examine, as it grows and then shrinks back to the result.




Bayesian society

It is maybe a more flexible society one which is guided by a variable ideology “I”,  fine-tuned continuously by bayesian techniques. The individual would be replaced by the bayesian individual, which forms its opinions from informations coming through a controlled channel. The input informations are made more or less available to the individual by using again bayesian analysis of interests, geographical location and digital footprint (creative commons attribution 2.0 licence, free online), closing the feedback loop.