Our “TrustIt Labs” news-source credibility analysis project has been flying mostly beneath the radar, so it’s clearly time for another update and an explanation of our thinking as this research and development project proceeds.
In case you hadn’t noticed, we started out with the working name “TrustIt?”, but we removed the question mark and expanded the name to include “Labs” to better represent the overall project. As for that nixed question mark: it made the ends of sentences unnecessarily confusing, and removing it was, in a way, symbolic of how we on the project team are trying to remove the ambiguity about what is and is not a trustworthy news source. We figured that, once all is said and done, you should at least trust TrustIt.
The road to developing a solid toolkit for news-source credibility assessment has so far proven confusing, complicated, and hard to grasp at most, if not all, times. It has also shown itself to be rather reminiscent of other human emotional experiences. After all, what are we doing but trying to manage trust issues among the news-consuming public? Aren’t we simply trying to mitigate the changes the Internet has brought to our perceptions of credibility? Aren’t we wading into the psychological inconsistencies of news consumers, whose contradictory perceptions of credibility are becoming harder and harder to manage in our overflowing pools of information?
I trust Wikipedia (for the most part) to be a credible source of information. In fact, here is an excerpt from our trustworthy community-edited intellectual companion regarding the sociological concept of “trust”:
In a social context, trust has several connotations. Definitions of trust typically refer to a situation characterised by the following aspects: One party (trustor) is willing to rely on the actions of another party (trustee); the situation is directed to the future. In addition, the trustor (voluntarily or forcedly) abandons control over the actions performed by the trustee. As a consequence, the trustor is uncertain about the outcome of the other’s actions; he can only develop and evaluate expectations. The uncertainty involves the risk of failure or harm to the trustor if the trustee will not behave as desired.
Trust can be attributed to relationships between people. It can be demonstrated that humans have a natural disposition to trust and to judge trustworthiness that can be traced to the neurobiological structure and activity of a human brain, and can be altered e.g. by the application of oxytocin.
Conceptually, trust is also attributable to relationships within and between social groups (families, friends, communities, organizations, companies, nations, etc.). It is a popular approach to frame the dynamics of inter-group and intra-group interactions in terms of trust.
When it comes to the relationship between people and technology, the attribution of trust is a matter of dispute. The intentional stance demonstrates that trust can be validly attributed to human relationships with complex technologies. However, rational reflection leads to the rejection of an ability to trust technological artefacts.
One of the key current challenges in the social sciences is to re-think how the rapid progress of technology has impacted constructs such as trust. This is specifically true for information technology that dramatically alters causation in social systems.
In the social sciences, the subtleties of trust are a subject of ongoing research. In sociology and psychology the degree to which one party trusts another is a measure of belief in the honesty, fairness, or benevolence of another party. The term “confidence” is more appropriate for a belief in the competence of the other party. Based on the most recent research, a failure in trust may be forgiven more easily if it is interpreted as a failure of competence rather than a lack of benevolence or honesty. In economics trust is often conceptualized as reliability in transactions. In all cases trust is a heuristic decision rule, allowing the human to deal with complexities that would require unrealistic effort in rational reasoning.
For a while, I was concerned that “TrustIt” was a bit of a misnomer for our project, considering “trust” is only one component of credibility. However, I am coming to realize that the role of TrustIt within the sphere of credibility might be to help alleviate the confusion brought to our perceptions of trust now that information is increasingly less attached to tangible materials such as newsprint or local, human sources. In my research, I’ve come across much evidence that people simply place less trust in (in other words, assign less credibility to) information found online than information found in print sources. But how logical can it be, for example, for a physician to disregard a patient’s externally gathered medical information when a patient can now access WebMD, Yahoo! Answers, and the National Institutes of Health’s research results alike? The reality is that the medium in itself should not be questioned when it comes to assessing credibility, because the Internet is simply vast enough to contain all media.
On the other hand, we CAN question our sources. And, when we’re able, we can and should always question the content. It’s a much easier feat, for example, to distinguish among WebMD, Yahoo! Answers, and the NIH as sources of medical information than it is to distinguish among the New York Times, the Hollywood Reporter, and the blog curated by one’s cousin, especially if that cousin has a knack for criticizing foreign affairs in a matter-of-fact manner and spent a decade working for the Associated Press. But that’s what we’re trying to do.
We’re trying to mathematize a socio-psychological struggle, in the sense that we want to help people better manage their feelings of trust when it comes to assessing digital media sources. On the other hand, we’re trying to optimize people’s information-seeking performance and help them get more useful results faster when they’re searching for content that passes a certain measure of public credibility. And, at the very least, maybe we can help some eighth graders find decent sources for their papers and help their parents find reliable sources for their morning news, lest an article from The National Enquirer be misattributed to The National Review over some water-cooler banter later on at work.
One of the clearest messages I hear from the mounds of pre-existing research on the matter is this: “People put their trust, and their trust issues, in many wrong places!” Research shows this rings doubly true for youth.
So our process is getting more and more complicated.
First, we want to rile people up about their criteria for trustworthy news content and question their decision-making, so that we can understand how they come to trust or distrust different aspects and sources of media content.
Second, we want to figure out how to box these processes into a formula that can predict their decision-making and improve upon it when it comes to choosing news sources.
Third, we want to free them from the burdens of over-thinking and the consequences of under-thinking when they read news content, by helping steer their decision-making in whatever direction they want to head.
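To make the second step a little more concrete, here is a minimal sketch of what a first-pass “formula” might look like: a weighted combination of observable source signals squashed into a 0-to-1 score. Every feature name, weight, and the example profile below is hypothetical, invented for illustration; it is not TrustIt’s actual model.

```python
import math

# Hypothetical source signals and weights; none of these are TrustIt's
# real features. Positive weights raise the score, negative ones lower it.
EXAMPLE_WEIGHTS = {
    "has_named_authors": 1.2,
    "cites_primary_sources": 1.5,
    "issues_corrections": 0.8,
    "ad_to_content_ratio": -1.0,
}
BIAS = -1.0  # arbitrary baseline: an unknown source starts below 0.5


def credibility_score(signals: dict) -> float:
    """Combine 0-to-1 signals into a 0-to-1 score via a logistic function."""
    z = BIAS + sum(EXAMPLE_WEIGHTS[name] * value
                   for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-z))


# A made-up profile for a source that names authors, cites primary
# sources, publishes corrections, and runs relatively few ads:
profile = {
    "has_named_authors": 1.0,
    "cites_primary_sources": 1.0,
    "issues_corrections": 1.0,
    "ad_to_content_ratio": 0.2,
}
print(round(credibility_score(profile), 3))  # → 0.909
```

The appeal of a logistic form here is simply that it always yields a bounded score, so sources can be compared or thresholded; learning the actual weights from people’s real trust judgments is the hard part the first step is meant to inform.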
Our motto might as well be: “Got trust issues? Let us help!”
It would appear that the issues people have with trusting other people are colliding with a whole new set of issues relating to trusting technology. And when it comes to assessing source credibility, that same trust has to be stretched across higher and higher degrees of disconnection between the information people come across and its human sources. As we, as a society, try to adjust to having more information at our fingertips than we could process in a lifetime, we also need to understand that the quality of this information varies far more than ever before. Filtering is largely up to the consumer now, whether it is practiced through the development of cognitive heuristics or simply by avoiding all but a few sources of information. It’s no wonder, then, that people have issues identifying credible information.
We constantly struggle to trust other people, both in personal interactions and in the foreign affairs that take place among nations. We struggle to trust information whose sources are not well known to us, and struggle even more to trust information that is online, essentially unfiltered, and organized by machines for our access. It’s high time, then, that we invent some tools to manage all of these trust issues, before we flee from pursuing our intellectual curiosities altogether or forfeit our decision-making to only a handful of sources as a way to lessen the informational burden.
The problem is simple enough: We have more information available to us than our brains can cope with. Furthermore, our decision-making when it comes to choosing information worthy of our trust (the interplay of a series of psychological, sociological, historical, and cognitive issues) further hinders and complicates our efforts.
It’s high time we built some coping mechanisms for this problem.