A mix of soft-science stuff this month; not that much going on, it seems.
- We start not with a new paper but with the retraction of a particularly famous one. The original 2012 paper, which found that people made to sign an honesty declaration were subsequently less likely to commit fraud, was widely publicized and actually put into practice by various governments. Subsequent attempts to replicate this effect, however, failed, and the researchers involved now acknowledge that the data it was based on appear to have been faked. The most famous of the scientists involved is Dan Ariely, who claims the data came from an insurance company but refuses to name the company. The case is still ongoing and threatens to completely destroy Ariely’s reputation and body of work.
- Next we have an economics paper that questions the effectiveness of television advertising. Based on a study of 288 brands, the authors conclude that such advertising has a negative rate of return for more than 80% of those brands. That’s far more than the commonly cited figure of around 50% of advertising spending being wasted, though I can’t speak to the quality of this paper.
- This next article seems highly speculative to me, but it’s worth knowing about. It claims that as people interact and cooperate with each other, the oscillations of their neural activity appear to synchronize. The call is for a wider understanding of the phenomenon of consciousness and an acknowledgment that the boundaries of the self are negotiated with the environment as well as with other people. This isn’t completely kooky science: we already know that the mind is what the entire body does, not just the brain, but this way of looking at things casts the net even wider.
- Finally, here’s a longer read released by DeepMind (now owned by Google) about their efforts to create an AI capable of open-ended learning. We’ve all heard by now about AI beating humans at all kinds of games, from chess to Starcraft II, but these are single-purpose AIs trained on a specific set of data to handle a specific challenge. This article talks about having general-purpose AI inhabit a 3D virtual world and learn to navigate and accomplish tasks within that world. There are plenty of pictures too, covering all kinds of things that the AIs need to figure out without being specifically programmed to do so. It makes for a fascinating read, especially as each agent in the virtual world learns to interact with other agents.