March 30, 2017
What people believe sometimes becomes real because of that.
1. Introduction. People are driven by beliefs and assumptions. We all make assumptions and use simplified thinking to cope with the complexities of daily life. These include stereotypes, heuristic decision-making, and the many forms of cognitive bias we are all subject to. Because the information individuals have is inherently limited, as are their cognitive capabilities, our rationality is naturally bounded (Simon, 1956).
2. Belief systems. I want to talk about what I call “belief systems”. They can be defined as a form of shared thinking within a community or a niche of people. Some general characterizations follow. First, belief systems are characterized by a common language (vocabulary) and a shared way of thinking. Sociologists might call them communities or sub-cultures, but I am not using those terms because they are usually associated with shared norms and values, which do not matter in the context I refer to in this post.
3. Advantages and disadvantages. Second, the main advantage of belief systems is efficient communication: because all members share the belief system, they are privy to the meaning of its specific terms and concepts. The main disadvantage of belief systems is the so-called tunnel vision, which restricts members who have adopted a belief system from seeking or accepting alternative ways of thinking. Both the main advantage and the main disadvantage result from the same principle: the necessity of simplicity. What I mean is that if a belief system is not parsimonious enough, it is not effective for communication but might escape tunnel vision (and vice versa).
4. Adoption of belief systems. For a belief system to spread, it is subject to the laws of network diffusion (Katz & Shapiro, 1985). The more people have adopted a belief system, the more valuable it becomes for an individual user. This encourages further adoption in a form of virtuous cycle. Simplicity enhances diffusion: a complex system is unlikely to be adopted by a critical mass of people. “Critical mass” refers here to the number of people sharing the belief system needed for additional members to adopt it. This may not be any single number, since the utility functions controlling adoption are not uniformly distributed among individuals; the underlying assumption, however, is that belief systems are social by nature. If not enough people adopt a belief system, it is not remarkable enough to drive human action at a meaningful scale.
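The critical-mass dynamic described above can be illustrated with a small threshold-model simulation (a sketch of my own, not part of the post's argument; the threshold values are made up). Each individual adopts the belief system once the share of adopters reaches their personal threshold, which stands in for the non-uniform utility functions mentioned above:

```python
# Granovetter-style threshold model of belief-system adoption (a sketch).
# Each individual adopts once the fraction of current adopters reaches
# their personal threshold; heterogeneous thresholds stand in for the
# non-uniformly distributed utility functions.

def diffuse(thresholds, seed_fraction):
    """Iterate until adoption stabilizes; return the final adopter fraction."""
    n = len(thresholds)
    adopters = seed_fraction
    while True:
        new = sum(1 for t in thresholds if t <= adopters) / n
        if new == adopters:
            return adopters
        adopters = new

# Hypothetical population: 10 early adopters need only a 5% share of
# adopters before joining; the remaining 90 need a 20% share.
population = [0.05] * 10 + [0.20] * 90

print(diffuse(population, 0.00))  # no seed: nobody adopts
print(diffuse(population, 0.10))  # below critical mass: adoption stalls
print(diffuse(population, 0.20))  # critical mass reached: full cascade
```

The same mechanics produce both outcomes in the text: below the critical mass, adoption stalls at a niche; above it, the virtuous cycle carries the belief system through the whole population.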
5. Understanding. Belief systems are intangible and unobservable by any direct means, but they are “real” in the social sense of the word. They are social objects or constructs that can be scrutinized through proxies that reflect their existence. The best proxy for this purpose is language. Thus, belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of members adhering to a belief system. An objective examiner would be able to observe and record the members’ use of language, and construct a map of the key concepts and vocabulary along with their interrelations and underlying assumptions. Through this procedure, any belief system could be dissected into its fundamental constituents, after which its merits and potential discords (e.g., biases) could be objectively discussed.
For example, startup enthusiasts talk about “customer development” and “getting out of the building” as a new, revolutionary replacement for market research, whereas marketing researchers might see little novelty in these concepts and could actually list those and many more market research techniques that would potentially yield a better outcome.
6. Performance. Objectively, one belief system might not be superior to another, either in terms of adoption or performance. In practice, a belief system can yield high performance rewards due to 1) additional efficiency in communication, 2) the randomness of it working better than competing solutions, or 3) its heuristic properties that, e.g., enhance decision-making speed and/or accuracy. Therefore, belief systems need not be theoretically optimal solutions to yield practically useful outcomes.
7. Changing belief systems. Moreover, belief systems are often unconscious. Consider the capitalist belief system, or the socialist belief system: both drive the thinking of individuals to an enormous extent. Once a belief system is adopted, it is difficult to unlearn. Getting rid of a belief system requires considerable cognitive effort, a sort of re-programming. An individual needs to be aware of the properties and assumptions of his belief system, and then want to change them, e.g., by looking for counter-evidence. It is a psychological process equivalent to learning, or “unlearning”.
8. Conclusion. People operate based on belief systems. Belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of a belief system. Belief systems produce efficiency gains for communication but simultaneously hinder consideration of possibly better alternatives. Because a belief system needs to be simple to be useful, people readily absorb it and do not question its assumptions thereafter. Changing belief systems is possible but requires active effort over a period of time.
Katz, M. L., & Shapiro, C. (1985). Network Externalities, Competition, and Compatibility. The American Economic Review, 75(3), 424–440.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138.
March 30, 2017
Two brands colliding.
Hm, I’m thinking (therefore, I am a digital marketer). The classical advertising question has been:
How to measure the impact of advertising on a brand?
And then the answer has been “oh, you can’t”, or “it’s difficult”, or something along those lines. But, they say, it is there! Marketers’ traditional argument for poor direct performance has been that there is a lift in brand awareness or attitude, which is sometimes measured by conducting cumbersome surveys.
But actually, aren’t the aforementioned attributes just predictors of purchase? I mean, they should result in higher probability of purchase, right? Given that people know the brand and like the brand, they are more likely to purchase it.
If so, the impact metric *is* indeed always sales — it’s only a question of choosing the period of examination. If all advertising impacts lead to sales, then sales is the metric even when we talk of brand advertising.
According to the previous logic, it would seem that measuring advertising impact by sales is always correct; but because of carryover effects (latent effects), the problem can be reformulated as:
What time period should we use to measure advertising impact?
And forget about measuring brand impact. It’s not a question of impact on “soft” issues but of impact on revenue. The influence mechanism itself might be soft, but it always needs to materialize as hard, cold cash. The trickier questions are determining the correct examination period for campaigns, which requires fitting it to the length of the purchase process, and keeping the analytics trail alive for at least that period.
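One common way to reason about carryover effects is a geometric decay (“adstock”) model, where this period’s effective ad pressure is this period’s spend plus a decaying share of earlier spend. A minimal sketch, assuming an illustrative decay rate of 0.5 (in practice the rate would be estimated from data):

```python
# Geometric adstock: effective ad pressure in week t equals spend in
# week t plus `decay` times the pressure carried over from week t-1.
# The decay rate here is an assumed value for illustration only.

def adstock(spend, decay=0.5):
    carried = 0.0
    out = []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

weekly_spend = [100, 0, 0, 0]     # a single burst of advertising
print(adstock(weekly_spend))      # -> [100.0, 50.0, 25.0, 12.5]
```

The decay series suggests a practical answer to the time-period question: the examination window should extend until the carried-over pressure has decayed to near zero, because sales up to that point can still be attributed to the campaign.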
Conclusions and discussion
If carryover effects occur, how can we determine the correct time frame for drawing conclusions on advertising impact?
…I have to say, though, that measuring brand sentiment can’t be wrong. It can help us understand why people like or dislike the brand, and therefore provide improvement ideas and a description of perceived brand attributes, information which is helpful for both product development and marketing.
But the ultimate metric for assessing advertising impact should always be sales.
March 30, 2017
This Inc article states a very big danger for Facebook: http://www.inc.com/jeff-bercovici/facebook-sharing-crisis.html
It is widely established in platform theory that reaching a negative tipping point can destroy a platform. Negative tipping is essentially the reverse of positive tipping — instead of gaining momentum, the platform starts quickly losing it.
There are two dimensions I want to look at in this post.
First, what I call “the curse of likes”. Essentially, Facebook has made it too easy to like pages and befriend people; as a result, it is unable to manage people’s newsfeeds optimally in terms of engagement. There is too much clutter, which leaves important social information out, and the “friend” network is too wide for the intimacy required to share personal things. The former reduces engagement rate; the latter results in unwillingness to share personal information.
Second, if people share less about themselves, the platform has a harder time showing them relevant ads. The success of Facebook as a business relies on its revenue model, which is advertising. Both of the aforementioned risks are negative for advertising outcomes. If relevance decreases, both a) user experience (through the negative effects of ads) and b) ad performance decrease as well, resulting in advertisers reducing their ad spend or, in the worst-case scenario, moving on to other platforms.
To counter these effects, Facebook can resort to a few strategies:
Overall, Facebook as a platform will not be eternal. But I think the company is well aware of this, since its strategy is to constantly buy out rivals. The platform idea persists even though individual platforms may perish.
March 30, 2017
This post explains the use of NVivo software package for analysis of qualitative data. It focuses on four aspects:
First, coding. This is simply giving names to phenomena observed in the material. It is a process of abstraction and conceptualization, i.e., making the rich qualitative material more easily approachable by reducing its complexity into simple, descriptive codes which can be compared to and associated with one another at later stages of the analysis.
(In the picture, the highlighted areas are coded by right-clicking them and giving them a descriptive label which relates to a phenomenon of interest.)
You can think of the codes taking shape in two ways: a) from previous literature or b) emerging as important points in the material based on the researcher’s judgment. (You can think of this from the perspective of deductive/inductive emphasis.) Either way, they are associated with one’s research questions — the material always needs to be analyzed in light of one’s research questions so that the results of the analysis remain relevant for the study’s purpose. Oftentimes, this first step of reading and coding all the material is referred to as open coding (e.g., Corbin & Strauss, 2008).
Second, categorization. This is simply bundling the codes with one another and placing them in a hierarchical “structure”.
(In the picture, you can see codes being formulated as “main themes” and “sub-themes”, i.e. categories that contain other categories.)
The structure should follow the operationalization of the study — this usually comes naturally, because the material is closely linked to the interview questions, which in turn are linked to the research questions (which are linked to the purpose of the study, forming a full circle from analysis to study purpose). For example, in the above picture the categories, or themes, relate to the challenges and opportunities of multichannel commerce – the topic under scrutiny.
Third, relationships. This is important – while reading the material, the researcher should form tentative relationships in his or her head. For example: “I see, x seems to be associated with y”. These are the “eureka” moments one gets while immersed in the analysis.
(In the picture, you can see several tentative relationships emerging from the analysis. A portion of them will be chosen for further validation/falsification, and potentially reported as outcomes of the study.)
The relationships can be coded instantly as you come across them — the beauty of NVivo is that you can code evidence (direct citations) into the relationships, and later, when you click a relationship open, you can find all the associated evidence (the same applies to all codes and categories). So, as you analyze the material further, you can add confirming and contrary evidence to previously conceived relationships while keeping a “trail of thought” useful for reporting the results. It is important to understand that at this point of the analysis the discovered relationships are interim findings rather than final conclusions.
Now, qualitative research can result in several reporting outcomes, one being propositions. Propositions are conclusions of qualitative analysis; they are in a way tentative suggestions of general relationships, and can be formulated into hypotheses for quantitative testing. However, propositions can also be “validated”, or made more robust, through qualitative comparison. This is done through an iterative process of a) reading the material repeatedly and trying to find both confirmatory and falsifying evidence for the interim propositions, and b) collecting more research data especially focused on learning more about the tentative propositions (in Grounded Theory, this is referred to as theoretical sampling; see Glaser & Strauss, 1967). Once you have gone through this process of comparative analysis and theoretical sampling, you can have more confidence (not in a statistical but in an analytical sense) in your propositions.
Fourth, comparison of background variables. It took me a long time to learn the power of comparing background information – but its potential and importance are really high, especially when the qualitative analyst wants to move beyond description to deeper understanding. I believe it was Milton Friedman who said the goal of research should be to find the “hidden constructs” of reality. In quantitative studies this is done by a) identifying latent variables and b) finding statistical relationships between variables. In qualitative studies, we do not speak of relationships in the statistical sense but rather of “associations”, which can be of many types (some examples below).
(The picture depicts different types of associations named by the researcher, including, among others, “challenge”, “solution”, and “opportunity”.)
Anyway, there is absolutely no reason why we should not pursue the discovery of hidden realities in qualitative studies as well. In NVivo, this can be done via classifications and attributes. First, decide which background information is important for your comparison. Then, include it in your interview questions. Once you have the material, code all interviews (each representing an informant) as nodes – to these nodes, you will assign a classification schema with the chosen background information (attributes).
(The picture includes a comparison matrix of small and large firm representatives’ views on customer service challenges — it can easily be expanded to include other dimensions as per the analysis framework.)
For example, suppose you would like to compare the views of small and large firms on a specific multi-channel challenge, say customer service. You create a classification schema and give it the attribute “size” with two potential values, small and large. Then, you run a matrix coding query with small and large as rows and customer service as a column. Here, you can see the number of occurrences and, more importantly, you can click them open to see all the associated evidence. You are still tied to the researcher’s judgment, or “interpretivism”, when comparing the answers, but at least this way you can conduct comparisons more systematically and in accordance with your analytical framework. It also helps you discover patterns – for example, we might find that large firms tend to emphasize different challenges than smaller firms.
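The same matrix logic can be reproduced outside NVivo for readers who prefer scripting. A minimal sketch with pandas, using entirely made-up coded segments (the informants, attribute values, and codes below are hypothetical):

```python
import pandas as pd

# Hypothetical coded segments: each row is one coded passage, tagged
# with the informant's firm size (the attribute) and the challenge code.
segments = pd.DataFrame({
    "firm_size": ["small", "small", "large", "large", "large", "small"],
    "code":      ["customer service", "logistics", "customer service",
                  "customer service", "logistics", "customer service"],
})

# Equivalent of a matrix coding query: attribute values as rows,
# codes as columns, counts of coded passages in the cells.
matrix = pd.crosstab(segments["firm_size"], segments["code"])
print(matrix)
```

The resulting table plays the role of the comparison matrix: cell counts show how often each group mentions each challenge, and filtering `segments` by a cell’s row and column values retrieves the underlying evidence.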
Finally, I’d say the fifth important aspect of qualitative analysis is visualization of the results, usually in the form of a model or framework. Unfortunately this is where NVivo fails hard.
(Unfortunately, it is not possible to draw a moderating line from the third variable to the relationship between the two other variables in NVivo.)
For example, you can’t draw “moderating” relationships, and variable names are cut short in a language such as Finnish, which has long words (I’ve reported these shortcomings to QSR, the maker of NVivo). Granted, moderation is usually understood as a property of quantitative studies, but there’s no reason why a qualitative framework or model shouldn’t also incorporate moderating relationships in a conceptual model (which could later be tested by structural equation modeling, for example). So, until these problems are fixed, I’d recommend sticking to other tools for visualization, such as Microsoft PowerPoint.
Corbin, J., & Strauss, A. (2008). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. SAGE.
Glaser, B. G., & Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Transaction Publishers.
The author holds a PhD in marketing, and is teaching and conducting research at the Turku School of Economics. His topics of interest include digital marketing, startup companies and platforms.
March 29, 2017
In Hindu scripture there is a famous passage in which the god Vishnu describes himself as death; to Westerners this is mostly known through Oppenheimer’s citation:
“Now, I am become Death, the destroyer of worlds.”
But there is another god in Hinduism, Brahma, who is the creator of the universe.
Just like these two gods, startups are dualistic in nature. In particular, they are both job creators and job destroyers. On one hand they create new jobs and job types. On the other hand, they destroy existing jobs.
This dualistic nature is often ignored when evaluating the impact of startups on society, although it is definitely at the core of the Schumpeterian theory of innovation. What really matters for society is the balance — how fast new companies are creating jobs vs. how fast they are destroying them.
I haven’t seen a single quantification of this effect, so it would definitely merit research. Theoretically, it could be captured by something like the startup impact ratio (SIR), defined as jobs produced divided by jobs destroyed:
SIR = jobs produced / jobs destroyed
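As a tiny sketch, the ratio and its interpretation can be written out directly (the job figures below are made up for illustration):

```python
# Startup impact ratio (SIR) as defined above: jobs produced divided
# by jobs destroyed. Input figures are hypothetical.

def startup_impact_ratio(jobs_produced, jobs_destroyed):
    if jobs_destroyed == 0:
        return float("inf")   # pure creation, no displacement
    return jobs_produced / jobs_destroyed

print(startup_impact_ratio(1200, 800))   # -> 1.5  (above 1: net positive)
print(startup_impact_ratio(500, 1000))   # -> 0.5  (below 1: net negative)
```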
As long as the ratio is above 1, startups’ impact on the job market (and therefore indirectly on society) is positive. In turn, if it’s below 1, “robots are taking our jobs”. Or, rather, above one Brahma is winning, while below one Vishnu is dominating.
Dr. Joni Salminen holds a PhD in marketing from the Turku School of Economics. His research interests relate to startups, platforms, and digital marketing.
Contact email: [email protected]
March 29, 2017
By definition, a dilemma is a trade-off situation in which there are two choices, each leading to a negative outcome.
A general solution, then, is to weigh the outcomes and compare them against one another.
choice A: -1
choice B: -2
In this example, choice A has the smaller negative effect, so we’d pick that one.
However, there are complications.
Consider that the above were in fact the short-term outcomes, and that there are also long-term outcomes. For example:
choice A: -1, -3
choice B: -2, -1
This leads us to payoff functions, where the outcomes (payoffs) consist of many variables. In the example, the long-term negative effects outweigh the short-term effects, and we would change our choice to B.
However, the choice can also be arbitrary, meaning that neither choice dominates. In game theory terms, there is no dominant strategy.
This would be the case when
choice A: -1, -2
choice B: -2, -1
As you can see, it doesn’t matter which choice we take, since each gives a negative outcome of equal total size. There is an exception to this rule, namely when the player has a preference between short- and long-term outcomes. For example, if he wants to minimize long-term damage, he would pick B, and vice versa.
In decision-making situations, it’s common to make lists of pluses and minuses, i.e., listing positive and negative sides. By assigning a numerical value to each, you can calculate the sum and establish a preference among choices. In other words, it becomes easier to make tough decisions.
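The comparisons above can be sketched in a few lines. This is a minimal illustration of my own, assuming a simple weighted sum: short- and long-term payoffs are added up, and an optional weight on the long-term component expresses the player’s time preference:

```python
# Comparing dilemma choices by summed payoffs. Payoff vectors are
# (short_term, long_term), as in the examples above; the optional
# weight expresses a preference about long-term outcomes.

def total_payoff(payoffs, long_term_weight=1.0):
    short_term, long_term = payoffs
    return short_term + long_term_weight * long_term

choice_a = (-1, -2)
choice_b = (-2, -1)

# Equal weights: the choices tie, the no-dominant-strategy case.
print(total_payoff(choice_a), total_payoff(choice_b))            # -> -3.0 -3.0

# A player minimizing long-term damage weights it more heavily,
# which breaks the tie in favor of choice B.
print(total_payoff(choice_a, 2.0), total_payoff(choice_b, 2.0))  # -> -5.0 -4.0
```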
I’m into digital marketing, startups, and platforms. Download my dissertation on startup dilemmas: http://goo.gl/QRc11f
March 29, 2017
This is a work in progress – I’ll keep updating this list as new moments of “eureka” hit me.
Want to add something? Please post it in the comment section!
March 29, 2017
Many workers are concerned about “robotization” and “automation” taking away their jobs. The media has also been writing actively about this topic lately, as can be seen in publications such as the New York Times and Forbes.
Although there is undoubtedly some dramatization in the scenarios created by the media, it is true that the trend of automation took away manual jobs throughout the 20th century and has continued – perhaps even accelerated – in the 21st.
Currently the jobs taken away by machines are manual labor, but what happens if machines take away knowledge labor as well? I think it’s important to consider this scenario: most of the focus has been on manual jobs, whereas future disruption is more likely to take place in knowledge jobs.
This article discusses what’s next – in particular from the perspective of artificial intelligence (A.I.). I’ve been developing a theory about this topic for a while now. (It’s still unfinished, so I apologize for the fuzziness of thought…)
My theory on development of job markets relies on two key assumptions:
The idea here is that while it is relatively easy to replace a job taken away by simple machines (sewing machines still need people to operate them), it is much harder to replace jobs taken away by complex machines (such as an A.I.) that provide higher productivity. Consequently, fewer people are needed to perform the same tasks.
By “development cycles”, I refer to drastic shifts in job market productivity, i.e.
craftsmanship –> industrial revolution –> information revolution –> A.I. revolution
Another assumption is that labor skills follow a Gaussian curve. This means most people are best suited for manual jobs, while the information economy requires skills at the upper end of that curve (the smartest and brightest).
In other words, the average worker will find it more and more difficult to add value in the job market, due to the sophistication of the systems (a lot more learning is needed to add value than in manual jobs, where training takes a couple of days). Even currently, the majority of global workers are better suited to manual labor than to information economy jobs, and so some economies are at a major disadvantage (consider Greece vs. Germany).
Consistent with the previous definition, we can see the job market as including two types of workers:
The former create systems as their job, whereas the latter operate them as their job. For example, in the sphere of online advertising, Google’s engineers create the AdWords search-engine advertising platform, which is then used by online marketers running campaigns for their clients. In the current information economy, workers who are able to create systems are in the best position – their value-added is the greatest. With an A.I., however, both jobs can be overtaken by machine intelligence. This is the major threat to knowledge workers.
The replacement takes place due to what I call the “errare humanum est” effect (the disadvantage of humans vis-à-vis machines), according to which a machine is always superior in job tasks compared to a human, an erratic being controlled by biological constraints (e.g., the need for food and sleep). Consequently, even the brightest humans will still lose to an A.I.
Consider these examples:
(Some of these figures are a bit outdated, but in general they serve to support my argument.)
Therefore, the ratio of workers to customers is much lower than in previous transitions. To build a car for one customer, you need tens of manufacturing workers. To serve customers in a supermarket, the ratio needs to be something like 1:20 (otherwise queues become too long). But when the ratio is 1:1,000,000, not many people are needed to provide a service for the whole market.
As can be seen, the mobile application industry, which has been touted as a source of new employment, does indeed create new jobs, but it doesn’t create them for the masses. This is because not many people are needed to succeed in this business environment.
Further disintermediation takes place when platforms talk to each other, forming super-ecosystems. Currently, this takes place through an API (application programming interface) logic, which is “dumb”, doing only prescribed tasks; an A.I. would dramatically change the landscape by introducing creative logic into API-based applications.
Many professional services are on the line. Here are some I can think of.
1. Marketing managers
An A.I. can allocate budgets and optimize campaigns far more efficiently than error-prone humans. The step from Google AdWords and Facebook Ads to automated marketing solutions is not that big – at the moment, the major advantage of humans is creativity, but the definition of an A.I. in this post assumes creative functions.
2. Lawyers
An A.I. can recall all laws, find precedent cases instantly, and give correct judgments. I recently had a discussion with one of my developer friends – he was particularly interested in applying A.I. to the legal system. Currently it’s too big for a human to comprehend: there are thousands of laws, some of which contradict one another. An A.I. could quickly find contradicting laws and give all alternative interpretations. The current human advantage is a sense of morality (right and wrong), which can be hard to replicate in an A.I.
3. Doctors
An A.I. makes faster and more accurate diagnoses; a robot performs surgical operations without flaw. I would say many standard diagnoses by human doctors could be replaced by an A.I. measuring the symptoms. There have been several cases of incorrect diagnoses due to hurry and human error – as noted previously, an A.I. is immune to these limitations. The major human advantage is sympathy, although some doctors lack even this.
4. Software developers
Even developers face extinction; upon learning the syntax, an A.I. will improve itself better than humans can. This would lead to an exponentially accelerating increase of intellect, something commonly depicted in A.I. development scenarios.
Basically, all knowledge professions, if accessible to an A.I., will be disrupted.
Actually, the only jobs left would be manual jobs – unless robots take them as well (there are some economic considerations against this scenario). I’m talking about low-level manual jobs – transportation, cleaning, maintenance, construction, etc. These require more physical material; due to the aforementioned supply and demand dynamics, it may be that people are cheaper to “build” than robots, and can therefore still assume simple jobs.
At the other extreme, there are experience services offered by people to other people – massage, entertainment. These can remain based on the previous logic.
I can think of a couple of ways.
First, learn coding – i.e., talking to machines. People who understand machine logic are in a position to add value — they have access to the society of the future, whereas those who are unable to use systems are disadvantaged.
The best strategy for a worker in this environment is continuous learning and re-education. From the schooling system, this requires a complete shift in thinking – currently most universities are far behind in teaching practical skills. I notice this every day in my job as a university teacher – higher education must catch up, or it will completely lose its value.
Currently higher education is shielded by governments through official diplomas appreciated by recruiters, but true skills trump such an advantage in the long run. Already at this moment I’m advising my students to learn from MOOCs (massive open online courses) rather than relying on the education we give in my institution.
At a global scale, societies are currently facing two contrasting mega-trends:
It is not hard to see that these are contrasting: fewer people are needed for the same output, whereas more people are born and thus need jobs. The increase in people is exponential, while the increase in productivity comes, according to my theory, in large shifts. A large shift is bad because before it takes place, everything seems normal. (It’s like an approaching tsunami – no way to know before it hits you.)
I can think of a couple of ways:
By adopting a Marxist approach, we can see there are two groups who are best off in this new world order:
Others, as argued previously, are at a disadvantage. The phenomenon is much like the concept of the “digital divide”, which can refer to 1) the difference in access to technologies between citizens of developed and developing countries, or 2) the ability of the elderly vs. the young to use modern technology (the former have, for example, worse opportunities in high-tech job markets).
There are some relaxations to the arguments I’ve made. First, we need to consider that the increase in free time, as well as general population growth, creates demand for services relating to experiences and entertainment per se; yet there needs to be a consideration of redistribution of wealth, as people who are unable to work still need to consume to provide work for others (in other words, the service economy needs special support and encouragement from the government vis-à-vis machine labor).
While it is a worthy goal that everyone contribute to society through work, the future may require a re-examination of this Protestant work ethic if the supply of work indeed drastically decreases. The major reason, in my opinion, behind the failure of policies reducing work hours, such as the 35-hour work week in France, is that other countries are not adopting them, and so they gain a comparative advantage in the global market. We are not yet at the stage where the supply of labor is dramatically reduced at a global scale, but according to my theory we are getting there.
Second, a major relaxation, indeed, is that systems can be usable by people who lack an understanding of their technical finesse. This approach is already widely applied – very few understand the operating principles of the Internet, and yet they can use it without difficulty. Even more complex professional systems, like Google AdWords, can be used without a detailed understanding of Google’s algorithm or Vickrey second-price sealed-bid auctions.
So, dumbing things down is one way to go. The problem with this approach in the A.I. context is that when the system is smart enough to use itself, there is no need to dumb it down – i.e., having humans use it would be a non-optimal use of resources. Already we can see this in some bidding algorithms in online advertising – the system optimizes better than people do. At the moment we online marketers can add value through copywriting and other creative work, but an A.I. would take away this advantage from us.
It is the natural state of job markets that most workers are skilled only for manual labor or very simple machine work; if these jobs are lost, a new way of organizing society is needed. Rather than fighting the change, societies should approach it objectively (which is probably one of the hardest things for human psychology).
My recommendations for the policy makers are as follows:
Because the problem of reduced job demand is not yet acute, these changes are unlikely to take place until there is no other choice (which is, by the way, the case for most political decision-making).
Up to what point can human labor be replaced? I call it the point of zero human: the point when no humans are needed to produce an equal or larger output than what was being produced at an earlier point in time. The fortune of humans is that we are producing more all the time – if the production level were at the stage of the 18th century, we would already be at the point of zero human. Therefore, job markets are not developing predictably towards the point of zero human, but it may nevertheless be a stochastic outcome of the current rate of technological development. Ultimately, time will tell. We are living in exciting times.
March 29, 2017
The return on investment (ROI) of academic publishing is absolutely terrible.
Think of it – thousands of hours spent correcting formatting, spelling, rephrasing, and so on. All this after the actual insight of the research has been accomplished. In all seriousness, spending 10% of one’s time doing research and 90% writing and rewriting cannot be thought of as anything but waste.
The inefficiency of the current way of doing things – combining doing research and writing about it under the same label of “doing research” – is a horrible waste of intelligence and human resources. It inflates the cost of doing research and makes scientific progress slower than if 90% were spent on research and 10% on writing.
Some might say it’s a perverse outcome of letting staff go – nowadays even professors have to do everything by themselves because there are so few assistants and administrators. Why is this perverse? Because at the same time more people need work. It’s also perverse, or paradoxical, because letting the help go is done to increase efficiency, but in the end it actually decreases efficiency as research staff shift their time from doing research to fixing spelling errors. There is a widespread misunderstanding that letting people go leads to better efficiency – it may save costs, but exactly at the cost of efficiency.
The thought for this article came to mind when my colleague and I received yet again some minor edit requests for an article to be published in a book – the book material was ready last year, but all these people are working to fix minor details that add zero substantive value. What a waste!
And I’m not alone in this situation; most if not all academics face the same problem.
Two solutions readily come to mind:
The latter one is much better, as the first option misses the importance of interpreting the results and theorizing from them (the whole point of doing research).
Efficiency, such as the ROI of research, should be defined as learning more about the world. This will never be accomplished by writing reports but by going out into the world. At the same time, I don’t mean to undermine basic research – the ROI of research is not the same as its immediate usefulness, let alone its immediate economic potential. ROI in my argument simply refers to the ratio of doing research vs. writing about it, not the actual quality of the outcome.
The author works as a university teacher at the Turku School of Economics.