Tuesday, October 14, 2025

Emotion and Identity: Silent Architects of Attention, Thought, and Action

When people are asked why they made a particular decision, they often describe it in terms of logic and reasoning. Yet research across psychology, neuroscience, and sociology consistently shows that much of what guides human behavior lies outside deliberate rationality. Two of the most powerful but subtle forces are emotion and identity. These factors determine what we notice in our environment, how we interpret it, and how we respond. Even more profoundly, they shape the situations in which we find ourselves and often keep us trapped in them, sometimes long after reason would suggest leaving.

 

The human brain processes far more sensory information than it can ever consciously attend to. Psychologists often describe attention as a spotlight: it illuminates a small portion of the environment while leaving the rest in shadow. What determines where this spotlight lands is often emotional salience. Research shows that emotionally charged stimuli—such as threatening faces, symbols of danger, or even images linked with reward—are noticed more quickly and remembered more vividly than neutral stimuli (Pessoa, 2009).

 

This tendency has clear evolutionary roots. Early humans who rapidly noticed the snake in the grass or the angry glare of a rival were more likely to survive than those who overlooked such cues. But in modern life, this same attentional bias means that our emotional states can dramatically skew what we perceive. Someone feeling anxious may notice only the risks in a situation, while someone feeling joyful may see possibilities that others overlook. In this way, emotion is not just a passing experience but a force that shapes perception at its most basic level.

 

If emotion determines what feels urgent, identity determines what feels relevant. Identity is the collection of roles, values, and group memberships through which people define themselves. Social identity theory demonstrates that individuals pay heightened attention to information related to their in-groups, because such cues are tied to self-esteem and belonging (Tajfel & Turner, 1986). Similarly, self-schema research shows that people are more likely to notice and remember information that is consistent with their self-concept (Markus, 1977).

 

For example, someone who strongly identifies as a parent will quickly notice environmental cues related to children’s safety, while someone who defines themselves as a professional athlete may immediately spot opportunities for competition or training. Identity, in this way, organizes attention around the themes that make life feel coherent and meaningful. But it can also narrow focus, making people blind to information outside their roles.

 

Once attention is captured, emotion continues to shape how people think. Emotions act as cognitive frames, influencing interpretation and judgment. Research shows that anger often leads to overconfidence and polarized thinking, fear promotes risk-aversion, and sadness encourages deeper reflection and more systematic analysis (Lerner, Li, Valdesolo, & Kassam, 2015). In short, emotions bias not only what we notice but also how we reason about it.

 

Consider the example of a political debate. A viewer who feels anger may quickly categorize one side as entirely right and the other as entirely wrong. Another viewer, experiencing sadness over societal problems, might adopt a more nuanced perspective and weigh competing arguments carefully. Thus, emotions operate like mental filters that tilt the balance of thought processes, often outside awareness.

 

If emotion frames cognition in terms of feeling, identity frames it in terms of meaning. People are motivated to interpret information in ways that protect and affirm their identities, a process psychologists call motivated reasoning (Kunda, 1990). For instance, partisans often interpret ambiguous political events in ways that favor their party’s position. Similarly, religious or cultural identities can influence how individuals make sense of moral dilemmas or scientific evidence.

 

This identity-based filtering gives life coherence but also introduces bias. When people’s sense of self is tightly bound to a group or ideology, they may discount or reject information that threatens that identity. This dynamic helps explain why debates about politics, religion, or social values often feel intractable: they are not merely exchanges of evidence but defenses of selfhood.

 

EMOTION CONTINUED IN NEXT BLOG POSTING

Saturday, September 27, 2025

Kimmel, Kirk, and Us

As almost everyone knows by now, Jimmy Kimmel was "indefinitely suspended" by ABC on September 17, 2025, following his comments about the fatal shooting of conservative commentator Charlie Kirk. His primary false and offensive comment, quoted precisely, was: "We hit some new lows over the weekend with the MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them and doing everything they can to score political points from it."

Let’s deconstruct Jimmy Kimmel’s assassin-related comments and then educate him.

First, Kimmel referred to Tyler Robinson as a "kid" despite his being 22 years old; so, Kimmel needs to learn the following facts:

  • Primary elections: 21 states and Washington, D.C. allow 17-year-olds who will turn 18 by the general election to vote in the preceding primary election.
  • Local elections: Some towns and cities allow citizens younger than 18 to vote in local elections. Examples include several cities in Maryland where the voting age has been lowered to 16 for municipal contests.
  • Voter preregistration: Most states, along with Washington, D.C., allow young people to preregister to vote before they are 18. The preregistration age varies by state, but can be as young as 16.
  • An American can join the armed forces without parental approval at age 18. If a person is 17 years old, they need the written consent of a parent or legal guardian to enlist. 
  • Approximately 61% of the Americans killed during the Vietnam War were 21 years old or younger.

Accordingly, Tyler Robinson was no kid, despite Jimmy Kimmel’s desire to find an excuse for the assassin.

Second, Robinson was in no way MAGA. In fact, he was virulently, hatefully anti-MAGA. Moreover, Robinson was totally, delusionally opposed to democracy and free speech. For instance, he justified murdering Charlie Kirk by saying, "There is too much evil and the guy [Charlie Kirk] spreads too much hate." So, an evil, hateful assassin projects his evil, hateful personality characteristics onto his target. And that targeted person was a staunch advocate for democracy and free speech.

Now let's return to Jimmy Kimmel. On September 23rd, a mere six days after being put on indefinite suspension, he was almost fully back on the air. And by September 26, he was fully back. Anyone with two intact cerebral hemispheres is not surprised to know that Kimmel's television ratings profited enormously from his hateful speech. Here is a table showing his audience ratings before and after his slandering of Kirk:

Comparative Ratings Table: "Jimmy Kimmel Live!" Before and After Controversy

Period          Total Viewers   18-49 Demo Rating   Notes
1st qtr 2025    1.77 million    0.48                Pre-controversy baseline
8/1/25          1.1 million     0.35                Summer low
9/15/25         1.1 million     0.13                Day of controversial monologue
9/23/25         6.26 million    0.87                Reinstated with ~4x baseline viewership
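As a quick sanity check on that "4x" label, here is a minimal Python sketch (the figures come from the table above; the variable names are mine) comparing the reinstatement night with the first-quarter baseline:

```python
# Compare Kimmel's 9/23/25 reinstatement-night numbers with the
# 1st qtr 2025 baseline (figures from the table above).
baseline_viewers, baseline_demo = 1.77e6, 0.48   # 1st qtr 2025
return_viewers, return_demo = 6.26e6, 0.87       # 9/23/25

print(f"Total viewers:     {return_viewers / baseline_viewers:.2f}x baseline")  # ~3.54x
print(f"18-49 demo rating: {return_demo / baseline_demo:.2f}x baseline")        # ~1.81x
```

So "roughly fourfold" fits total viewership (about 3.5x), while the 18-49 demo rating nearly doubled.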

 

That roughly fourfold jump in total viewers over baseline is precisely what rewards and keeps the influencer hate going. But perhaps you would argue that free speech is precisely what Charlie Kirk was advocating, and that is true. You, I, and the people next door should be able to say whatever we want, because what we say will not promote widespread violence or severe retribution. We simply don't have the platform to distribute our biases across the nation. My professional opinion—for what it's worth—is presented below in a more academic style.

The Power of the Microphone: Free Speech in the Age of Influence

In democratic societies, free speech is a cornerstone of liberty—a right enshrined in constitutions, protected by courts, and celebrated in public discourse. But as the digital age has redefined who holds a microphone, the consequences of speech have grown exponentially. There’s a critical difference between a private citizen expressing an opinion and a public figure with millions of followers making irresponsible, derogatory, or violent political statements.

Influence Amplifies Impact

A private citizen might vent frustrations at a dinner table or post a controversial opinion online, reaching a handful of people. But when someone with an enormous platform—be it a celebrity, politician, or elite influencer of any kind—uses their voice to spread inflammatory rhetoric, the stakes change. Their words can ripple across society, shaping public sentiment, fueling division, and even inciting violence.

Free Speech vs. Public Safety

The First Amendment protects speech from government censorship—but it doesn’t shield speakers from accountability. Courts have long held that speech inciting imminent lawless action is not protected (Brandenburg v. Ohio, 1969). The challenge today is that “imminence” is harder to define when viral posts can reach millions in seconds, and when coded language or dog whistles can mobilize groups without explicit calls to violence.

Social media companies have grappled with this dilemma. Platforms like Twitter (now X), Facebook, and YouTube have suspended or banned accounts of high-profile individuals for violating policies on hate speech and incitement. These decisions often spark debates about censorship, bias, and the boundaries of free expression.

Responsibility Comes with Reach

With great reach comes great responsibility. Public figures—especially those in politics or media—must recognize that their words carry weight. A single tweet or soundbite can validate extremist views, undermine democratic institutions, or provoke unrest. The difference between a private citizen and a public figure isn’t just scale—it’s influence. And influence, when wielded recklessly, can be dangerous.

Navigating the Future

As society continues to wrestle with the balance between free speech and public safety, one principle remains clear: speech is not just a right—it's a responsibility. The louder the microphone, the greater the duty to use it wisely. Free speech promulgated and disseminated by biased, controlling elites is not free; it exacts profound costs by destroying democracy, safety, and civility. Jimmy Kimmel successfully parlayed punishment into profit. He no doubt has now taught millions of others to do the same—a master class in how to divide and destroy America. Our children, China, Russia, North Korea, and Iran are watching and learning!

 

Tuesday, July 15, 2025

Why We Remember Our Teenage Years So Vividly

There's something about the adolescent and early adult years that causes them to be preferentially emblazoned in our minds. A "reminiscence spike" consistently appears when remote memories are plotted across the lifespan. This blog post discusses that finding. The next will illustrate very significant emotional differences in teen and early adult cohort experiences across the decades from 1960 to 2010.

The Stories We Keep

Why do so many of our most vivid memories come from our teens and twenties? The rush of a first kiss, the feeling of driving alone for the first time, the concerts, the heartbreaks, the friendships that felt like they would last forever—these memories stick with us in a way that even more recent experiences often do not. This psychological phenomenon is known as the "reminiscence spike," and it refers to the tendency for older adults to recall a disproportionately large number of autobiographical memories from their adolescence and early adulthood—typically from about ages 10 to 30, with a peak around the late teens to early twenties.

But why do these years burn so brightly in the mind’s eye?

Researchers have been fascinated by this for decades. The reminiscence spike shows up reliably when people over the age of 40 are asked to recall the most important events of their lives, or when they are prompted with cues like "Tell me about a memorable experience associated with the word 'freedom.'" Time and again, people reach back to their younger years—even when their memory of other life periods fades.
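To make the shape of that curve concrete, here is a small, purely illustrative Python sketch; the numbers are synthetic, invented only to mimic the classic pattern (childhood amnesia, a spike around ages 10 to 30, and a recency rise near the present), not drawn from any study:

```python
# Illustrative lifespan retrieval curve: few early-childhood memories,
# a reminiscence spike peaking around the late teens to early twenties,
# and a recency rise near the rememberer's current age.
# All counts below are invented for illustration only.
import matplotlib.pyplot as plt

age_bins = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55]   # age at time of event
memories = [1, 4, 10, 16, 19, 15, 9, 6, 5, 6, 9, 14]        # memories recalled per bin

plt.plot(age_bins, memories, marker="o")
plt.xlabel("Age at time of remembered event")
plt.ylabel("Number of memories recalled")
plt.title("Illustrative reminiscence spike (synthetic data)")
plt.show()
```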

There are several psychological theories that seek to explain this striking memory phenomenon.

1. The Cognitive Account: Novelty Breeds Memory

One of the most influential explanations is the cognitive account, which suggests that we remember this period so well because it's packed with firsts: first job, first love, first move away from home. According to this view, the brain is more likely to encode and retain novel or emotionally intense experiences, and adolescence is full of them.

Psychologist David Rubin and colleagues have argued that these "firsts" act as strong memory anchors because new experiences lead to deeper encoding, and the novelty of events in adolescence and early adulthood makes them more memorable (Rubin, Wetzler, & Nebes, 1986).

2. The Identity-Formation Hypothesis: Memory Serves the Self

Another theory suggests that this upward spike in memory is tied to the process of identity formation. According to Erik Erikson’s stages of psychosocial development, adolescence and young adulthood are the key periods when individuals ask, “Who am I?” and “What do I want from life?”

This idea is supported by research showing that the events people remember from this time are often ones that shaped who they are: a life-changing teacher, a choice to pursue a career path, a rebellious phase, or a defining cultural moment. Conway and Pleydell-Pearce (2000) argue that autobiographical memory is organized around a “self-memory system,” and events that contribute to the construction of a coherent self are more likely to be remembered.

3. The Cultural Life Script Hypothesis: Society Writes Our Story

A third perspective focuses less on the individual and more on shared cultural expectations. This is known as the cultural life script hypothesis. According to this view, cultures provide a template—a sort of timeline—about when major life events are expected to occur (like falling in love, graduating, getting married, or having children). Because many of these events typically occur in adolescence and early adulthood, we remember them more vividly.

Berntsen and Rubin (2004) showed that when people are asked to recall the “typical life of a person,” most of the important events they mention happen during this same reminiscence period—whether or not they personally experienced them. This suggests that memory is partially structured by shared societal narratives.

4. Neurological and Biological Changes: Brain at Its Peak

Some researchers point to neurological development. During adolescence, the brain—particularly the hippocampus and prefrontal cortex, which are crucial for memory encoding—is both active and plastic. Hormonal changes and heightened emotions can also strengthen memory formation. This period might simply be when our brains are most efficient at forming long-lasting, emotionally rich memories (Ghetti & Bunge, 2012).

5. Emotional Intensity and Rehearsal

Finally, the emotions associated with adolescence may be stronger and more personally meaningful, and we tend to rehearse those memories more often—by telling stories, looking at old photos, or daydreaming. Emotionally charged memories, especially those that are frequently revisited, tend to be better consolidated and retained over time (Kensinger, 2009).

To conclude, the reminiscence spike is not just a quirk of memory—it's a window into how we build our life stories. These adolescent and early adult memories serve as emotional landmarks, guiding our sense of self across the years. Whether it's your first apartment, the song that played during your senior prom, or the rush of independence that came with your first road trip, these are the moments our minds cling to—not only because they were exciting, but because they helped define who we are. As life continues, new memories form, but the ones from that crucial period of becoming remain the most vivid chapters in the autobiography we carry in our minds.

Although psychology has completed dozens of reminiscence spike studies, I have yet to find research in one related area that deserves consideration: the fact that memories are gross abstractions of actual experience. And those abstractions often include major distortions. Some of your reminiscences, or parts of them, are patently false. Moreover, what you remember is always influenced by how you are feeling at the moment of recall. That notion of "state-dependent memory" is robust and important. So, if you are happy at the moment of reminiscence, you are more likely to recall fondly; the opposite holds if you are sad.

 

REFERENCES

Berntsen, D., & Rubin, D. C. (2004). Cultural life scripts structure recall from autobiographical memory. Memory & Cognition, 32(3), 427–442. https://doi.org/10.3758/BF03195836

Conway, M. A., & Pleydell-Pearce, C. W. (2000). The construction of autobiographical memories in the self-memory system. Psychological Review, 107(2), 261–288. https://doi.org/10.1037/0033-295X.107.2.261

Ghetti, S., & Bunge, S. A. (2012). Neural changes underlying the development of episodic memory during middle childhood. Developmental Cognitive Neuroscience, 2(4), 381–395. https://doi.org/10.1016/j.dcn.2012.05.002

Kensinger, E. A. (2009). Remembering the details: Effects of emotion. Emotion Review, 1(2), 99–113. https://doi.org/10.1177/1754073908100432

Rubin, D. C., Wetzler, S. E., & Nebes, R. D. (1986). Autobiographical memory across the adult lifespan. In D. C. Rubin (Ed.), Autobiographical memory (pp. 202–221). Cambridge University Press.

 


Tuesday, July 1, 2025

What You Say and Don't Say

There’s a curious power in silence. Not just in what isn’t spoken, but in what is deliberately withheld. Every conversation, every sentence, even the briefest exchange, is an act of editing. We choose our words carefully—or sometimes carelessly—but either way, we’re revealing a version of ourselves. At the same time, we’re concealing something else. That’s the quiet truth at the heart of communication: what you say is only half the story. 

Think of the last time you held your tongue. Maybe it was in the middle of an argument, when your pride ached to say something sharp, but your better judgment told you not to. Or maybe it was during a moment of vulnerability, when someone you cared about opened up—and instead of blurting out advice, you simply listened. In either case, your silence wasn’t empty. It was filled with meaning, restraint, perhaps even love.

Words carry weight, but so does their absence. We sometimes forget this in a culture that rewards volume, speed, and opinions broadcast into the void. Social media encourages us to speak instantly and incessantly, as if silence were an admission of ignorance or irrelevance. But in real life, choosing not to say something can be the strongest statement of all. It can be a sign of maturity, of empathy, of knowing that not every thought needs to be shared to be understood.

Of course, there are risks in silence, too. Not speaking up when something matters—when injustice unfolds in front of you, or when someone needs a defender—can feel like complicity. That’s the other side of the coin. Just as our silence can protect, it can also betray. The challenge is learning when to use it wisely.

Ultimately, the way we communicate is less about mastering language and more about mastering ourselves. It's about knowing that every word you release into the world changes something, however small. And every word you keep tucked away does, too. So, the next time you're about to speak—or hold back—ask yourself not just what you want to say, but why. Because in the end, what you choose to say—and not say—becomes the voice of who you are.

I also want to underscore the power contained in single word choices.  That is, what we say—and don’t say—is shaped not only by dictionary content but also by tone, and more specifically, by the emotional charge of the words we choose. Language isn’t just a tool for conveying facts. It’s a vehicle for feeling, for stirring emotion in others, and for revealing what lies under the surface of our own thoughts.

A single word—carefully chosen or carelessly flung—can elevate or destroy, soothe or provoke. Take the difference between saying someone is “stubborn” versus “determined.” Technically, they describe a similar trait, but emotionally, they land in entirely different places. “Stubborn” feels harsh, rigid, even critical. “Determined” feels admirable and strong. The facts remain unchanged, but the emotional color shifts entirely depending on the word.

That’s why individual word choice matters so deeply in close relationships, in leadership, and even in casual conversation. Think about how different “I’m disappointed” feels compared to “I’m angry.” Or how “I understand” can feel vastly more comforting than “I get it.” Each word carries emotional weight—a kind of invisible gravity that can pull others in or push them away.  Politicians, poets, and advertisers all know this. They wield words not just to inform but to move—to trigger hope, fear, pride, shame, or urgency. And we do this, too, even when we’re unaware. Our word choices are emotional fingerprints, revealing our moods, biases, and intentions, even in subtle ways.

The beauty and burden of language is that every word carries baggage. And that baggage enters the room the moment we speak. So, as much as communication is about deciding what to say and what to leave unsaid, it's also about the emotional texture of how we say it. A gentle word can soften the hardest truth. A harsh word can shatter even a delicate silence. In the end, words are not just tools—they are instruments. And like any instrument, they can play music or make noise. The difference is in how consciously—and compassionately—we choose to use them. The obvious point of this blog post is: what you choose to say, not say, and the words you select all determine what you communicate conceptually and emotionally, what you enact relationship-wise, and what you disclose about yourself.

Wednesday, May 7, 2025

Museum Machinations

 When you walk into a science museum in Canada, you might expect to see the usual exhibits: dinosaurs, space exploration, maybe a section on the human body. But in many of these museums—especially in the last decade or so—you’ll also come across exhibits that highlight indigenous knowledge systems. They supposedly are included to “integrate indigenous science” alongside Western science.

Take the Canadian Museum of Nature in Ottawa or the Science North Centre in Sudbury as examples. These places now go beyond just displaying indigenous artifacts in a glass case with a label. Instead, they tell fuller stories that indigenous peoples have told for hundreds of years. And, importantly, the museums are calling it "science." Let's say there's an exhibit on animal migration or weather patterns. Western science might show satellite images or data from GPS collars on caribou. Right next to that, you might see a quote or video from an Inuit elder explaining how the animals' migration can be predicted by the thickness of sea ice or the behavior of birds. The display might even note that these traditional observations—passed orally over generations—have proven reliable and are now being used to complement Western research in areas like climate change.

Similarly, in botany exhibits, you'll see how indigenous knowledge of plant medicine is featured—not as folklore or superstition, but as a parallel system of empirical, research-based science. For example, the Haudenosaunee (Iroquois) people's use of white cedar for respiratory issues is sometimes included in exhibits not just as a cultural note, but as a remedy claimed to rest on reliable bioactive compounds.

The museums are moving away from terms like "myth" or "primitive beliefs." Instead, they use phrases like "indigenous knowledge systems," "traditional ecological knowledge," or "indigenous science." The implicit claim behind these shifts is that while the methods and metaphors may differ from Western science, the goals—observation, prediction, and explanation of the natural world—are fundamentally scientific.

Museum officials will likely rationalize their policy as an attempt to "decolonize science communication." In other words, they are motivated to change the long-standing tendency to treat Western science as the "gold standard" for knowing the world. By incorporating "indigenous science" respectfully and on equal footing, they seek to "broaden the public's understanding of what science can be."

Occasionally there are debates about whether some beliefs—such as spiritual interpretations of nature—fit into the category of science. But museums are increasingly comfortable with refusing to accept those arguments. Accordingly, museum visitors might see one exhibit showing a geological explanation of how a mountain formed, and right next to it, an indigenous story that explains the mountain's origin in cultural and spiritual terms. Whether intended or not, the juxtaposition implies that the "science" museum is not fully committed to the scientific method—that indigenous beliefs have equal standing.

In short, Canadian science museums are treating indigenous knowledge not as something "other" or "less," but as a legitimate, tested, and deeply rooted form of science. In my opinion, this Canadian "science" approach is grossly flawed. It is perfectly appropriate, even laudable, to respect and inquire about indigenous myths and traditions. But equating them with science is both absurd and intellectually dishonest. Science is much more than an artifact or a longstanding belief. Above all, science is a continually recursive process in which ideas are proposed, tested, challenged, retested, and refined.

I am not religious, and I firmly "believe" the basic theory of evolution. That belief is rooted in knowing that the theory has been tested in thousands of rigorously conducted scientific studies. However, if compelling evidence is discovered that debunks evolution, I am ready, willing, and able to revise my beliefs.

Speaking of evolution and returning to the Canadian museum approach, I have a question: Do their museums have Creationist explanations and displays in the evolution section? To repeat, I am not religious, and I do not believe in Creationism. However, there is a kind of Creationist belief system known as "theistic evolution." That perspective generally acknowledges the scientific validity of evolution while simultaneously suggesting that God initiated and guided the evolutionary process, possibly including the introduction of souls into humans. Most notably, Francis Collins, a highly respected figure in both scientific and religious circles, is a prominent proponent of theistic evolution. He was a leading professional in the field of genetics and director of the National Institutes of Health (NIH). Collins has written extensively on the compatibility of science and faith, advocating for the view that God created the universe and used the process of evolution to bring about life.

If you ask Canadian museum leaders why Creationism is not worthy of inclusion in their collections, the reply quite likely will be some thinly veiled version of "Creationism is pseudoscience advocated by ignorant, science-denying, right-wing zealots." As to the museum "indigenous science" that they do include, I would ask three questions: 1. What is your definition of science? 2. Specifically, what criteria do you use to evaluate the scientific bona fides of each indigenous science exhibit? 3. Who are the scientists responsible for the science that you do "advertise" and endorse?

This blog post, then, suggests that we should never blindly accept someone else's definition of science. When we do, we allow them to hurl intimidating incriminations of "You're not following the science" whenever we disagree. We always need to know what science is being presented, who produced it, and what its purpose is.

Tuesday, March 11, 2025

My just released new book

 


https://www.amazon.com/s?k=peter+j+mccusker&crid=22ORY5JO14N23&sprefix=peter+j+mcc%2Caps%2C99&ref=nb_sb_ss_fb_1_11_p13n-conservative-preferred-department-reranking

 My newly released book:

In contemporary society, words can be as destructive as bombs. Weaponized Communication: Improvised Explosive Devices explores how information, propaganda, and psychological tactics function as volatile tools of conflict—detonating social and political chaos, just as IEDs wreak havoc on battlefields.

This book delves into the parallels between physical and informational warfare, examining how narratives, misinformation, and strategic messaging are deployed to manipulate perception, destabilize adversaries, and exert control. From insurgent propaganda to state-sponsored disinformation campaigns, Weaponized Communication analyzes how groups exploit media, technology, and public discourse to create psychological and ideological explosions.

Central to this book is the idea that weaponized communications are often multimodal, involving listening-speaking, reading-writing, and visual reception-visual presentation. As such, integrated destructive information can be spread far and wide.

Social science studies and concepts are employed to unpack the mechanics of weaponized influence and deception in modern cultural conflicts. Whether in asymmetric warfare, cyber operations, or mass media manipulation, the weaponization of communication is shown as polluting the social fabric of western culture. 

Weaponized Communication: Improvised Explosive Devices offers a gripping and critical exploration of how words, like bombs, are engineered to disrupt, divide, and destroy. Forewarned is forearmed.


Saturday, March 1, 2025

It's in Your DNA?

It’s in Your DNA.  How often have you heard that comment?  How often has it added anything relevant or substantial to the discussion?  For instance, some people use a simplistic DNA premise to explain why someone is gay, lazy, artistic, wealthy, or criminal. Obviously, DNA is necessary for every bodily and mental feature of every human being, but it rarely is both necessary and sufficient.

To make my point, I suggest considering the commonly stated scientific fact that humans and chimpanzees share about 98-99% of their DNA. Why then are we so profoundly different from chimps? I know of no chimp astronauts, surgeons, carpenters, or beauticians. Where is that 1 to 2% human-chimpanzee difference, and what is its significance?
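To appreciate how much raw sequence even a "small" percentage represents, here is a back-of-the-envelope Python calculation; the roughly 3-billion-base-pair genome length is a standard ballpark figure, not a number from this post:

```python
# Rough scale of a 1-2% human-chimp difference, assuming a genome of
# approximately 3 billion base pairs (a standard ballpark figure).
genome_size = 3_000_000_000  # approximate human genome length in base pairs

for pct in (0.01, 0.02):
    print(f"{pct:.0%} difference = ~{genome_size * pct / 1e6:.0f} million base pairs")
# 1% difference = ~30 million base pairs
# 2% difference = ~60 million base pairs
```

Even a one percent difference leaves roughly thirty million bases in play, ample room for the regulatory and structural differences described below.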

One of the most important explanations follows from the genotype-phenotype distinction. In brief, the genotype is the genetic DNA code you inherit from your parents—a recipe that tells your body how to make proteins, which then determine things like your eye color, height, or even your risk for certain diseases. But just because a recipe exists doesn't mean the final dish will turn out exactly as written. Your phenotype is what actually shows up—the final, observable dish.

Phenotypic traits result from the way your body expresses your genetic instructions, and they depend on both your genes and your environment. For instance, your genotype might include genes for being tall, but poor childhood nutrition can cause you to be shorter than your genetic potential. To produce their optimal effects, genes need to be turned on or off at specific times, for specific durations, and in specific settings.

Time for more monkey business.  Everyone is familiar with obvious human-chimpanzee phenotypic differences, such as differences in brain size, body shape, hairiness, tooth size, and facial muscles that give humans flatter faces and smaller jaws. The genetic differences are less familiar to most of us.  Some of the more important ones are: 

Single Nucleotide Changes
A large portion of the genetic differences are single-nucleotide polymorphisms (SNPs), which are small changes in individual DNA bases. These changes are scattered across the genome.
Gene Regulatory Differences
While most of the genes in humans and chimps are nearly identical, their expression patterns differ significantly. This means the same genes may be turned on or off at different times, in different tissues, or at different levels. Regulatory regions (like promoters and enhancers) show significant divergence, particularly in brain-related genes.
Insertions, Deletions, and Duplications
Humans and chimps have differences in copy number variations—genes or sections of DNA that are duplicated or deleted. Humans have a higher frequency of duplications in genes associated with brain development and immunity.
Chromosomal Rearrangements
Humans have 23 pairs of chromosomes, while chimps have 24. This difference is due to a fusion event in humans where two ancestral ape chromosomes combined to form human chromosome 2. Structural changes like inversions and translocations also contribute to differences.
Accelerated Regions in Humans
Certain regions of the genome, called Human Accelerated Regions (HARs), have undergone rapid evolution in humans. Many of these regions are associated with brain development, cognitive function, and limb formation.

WHAT DOES ALL THIS HAVE TO DO WITH YOU AND WITH WHAT IS IMPORTANT TO YOU?  

In the interest of your time, let's take one very brief look at an extremely well researched, scientifically respected area of widespread genetic concern for many of us—Alzheimer's disease.

A common gene associated with Alzheimer's disease is APOE (Apolipoprotein E), specifically the APOE ε4 variant. There are three main versions of the APOE gene: ε2, ε3, and ε4. APOE ε4 is the strongest genetic risk factor for late-onset Alzheimer's (the most common form). However, having the gene does not guarantee that someone will develop the disease—it only increases the risk. One copy of APOE ε4 (from one parent) increases the risk by about 2-3 times compared to people without it; about 20-30% of people with one ε4 allele develop Alzheimer's. Two copies of APOE ε4 (one from each parent) increase the risk by 8-12 times; around 50-70% of people with two copies will develop Alzheimer's by age 85. People without APOE ε4 still have some risk, but it is significantly lower (about a 10% lifetime risk).
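For quick scanning, here is a minimal Python sketch that simply tabulates the figures cited in the paragraph above (the ranges are the post's; the table layout and labels are mine):

```python
# APOE e4 risk figures as stated above (approximate ranges, late-onset Alzheimer's).
risk_table = [
    # (genotype, relative risk vs. non-carriers, approximate lifetime risk)
    ("no e4 copies",  "1x (reference)", "~10%"),
    ("one e4 copy",   "2-3x",           "20-30%"),
    ("two e4 copies", "8-12x",          "50-70% by age 85"),
]

for genotype, rel_risk, lifetime in risk_table:
    print(f"{genotype:<14} relative risk: {rel_risk:<15} lifetime risk: {lifetime}")
```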

Takeaways

So, Alzheimer’s disease susceptibility almost certainly is partially in our DNA. But just as sharing 98 or 99% of our DNA with chimpanzees allows for radical differences between the two species, having one or two copies of APOE ε4 does not doom us to Alzheimer’s disease.  We do not automatically go from an Alzheimer’s genotype to an Alzheimer's phenotype. As mentioned previously, genes need to be turned on or off at specific times, for specific durations, and in specific settings.

My primary point in this blog is merely to underscore that it is almost always incorrect, and potentially damaging, for you to presume that your concerns are exclusively due to uncontrollable genetics. Instead, look to what you can do to improve your behavior and environments, since they are much more amenable to your deliberate influence.  

Saturday, February 1, 2025

What Do You Meme?

Most people have only a superficial understanding of what "meme" means. [The alliteration is purposeful and has meaning to me. You, too, should understand soon.] Those who use the word "meme" are likely also to know "mimetic," but may or may not connect the two terms. If you are reading this mindfully, you will see clearly that the second letters of the two targeted words differ: e vs. i. The two words took different paths into English. Since both terms are frequently spoken and written, I thought it might be useful to consider them as a way to sensitize us all to what we are saying and hearing.

The words meme and mimetic do share a common conceptual root but, as noted, they have distinct etymologies and distinct meanings, as well.  When most people speak about a meme, they mean an idea, icon, object, or action that is current and popular. But that does not conform to the original definition. 

Dr. Richard Dawkins introduced "meme" in The Selfish Gene (1976) as a "gene" analogue: a unit of cultural transmission parallel to the bodily transmission of a gene. To him, memes, like genes, needed to possess fidelity (be a faithful copy), fecundity (spread widely), and longevity (have long-standing influence). Of course, those three criteria can be interpreted subjectively or objectively. Regardless of whether the subjective or objective standard is applied, however, most often when I hear or read someone referring to a meme, it does not satisfy the required three criteria. So, I feel that using the word meme often is unnecessary and/or meaningless. Perhaps the user is trying to impress rather than inform. Instead of incorrectly using the word meme, for instance, why not just say "popular"—a less ambiguous word that conveys the intended idea?

Unlike meme, "mimetic" has a long history in classical philosophy, wherein it indicated the process of imitation, especially imitation in behavior or art. The original use of this word was qualitatively different from today's colloquial meaning—something mimetic had to include imitation of reality, representation of reality in a manner that evokes clear recognition or understanding, and verisimilitude (truth-likeness) that strives for a degree of believability or truthfulness. It may not be an exact replica of reality, but it should resemble reality enough to be recognized as an authentic representation, an interpretive aspect of the world, and also a creative transformation of the original.

My point in contrasting the original and colloquial uses of the terms meme and mimetic certainly is not to suggest what is absolutely right and what is absolutely wrong conversationally. Instead, I am intent on underscoring that people are sometimes more interested in signaling their identity or tribal affiliation than they are in communicating clearly. I'm sure you are well aware of Matt Walsh's documentary, What is a Woman? (2022), and of the sex- and gender-oriented Congressional hearings that illustrated the language absurdity and confusion attendant to a simple discussion of male-female differences.

There is nothing wrong in judiciously, infrequently using memes, given certain conditions. First, your use of the meme must match your interlocutor's use of it. Second, and most important, you must use the meme mindfully, not merely in rote imitation. Don't present a prefabricated, popular meme that may not really reflect the nuances of your personal idea. Speak with as much precision as possible. Then even if you use a meme, it will not be purely imitative, but closer to mimetic. For instance, rather than saying "at the end of the day," perhaps your idea would be better expressed with a phrase like "in the final analysis" or something even more creative and personal. Communicate primarily to transmit clear meaning rather than status or group affiliation.

Wednesday, January 1, 2025

Marines and Identity Signals

January 4, 2025 is the 60th anniversary of my arrival at Marine Corps Recruit Depot Parris Island, South Carolina, for the onset of a 4-year USMC active enlistment. In thinking about that, I reflect upon my lingering Marine identity and what it might suggest about identities in general. Let's start with my, admittedly bizarre, experience in Spain when vacationing there a few years ago. The incident occurred while I was waiting for a bus at a crowded stop. A stranger with a heavy Scottish accent walked up to me and said, "You were a Marine." In total shock, I asked, "How did you know?" to which he replied, "I worked as a civilian on a Marine base for 20 years. You all wear your hat the same way." I never had thought about hat-wearing as significant or even noteworthy. At the time, mine was a plain baseball cap with no logo or words whatsoever. My best guess is that the distinctive features were due to the facts that I wore the hat tight to my head, pulled close to my eyebrows, and that I had squeezed the visor into a virtual semicircle. Only days later did I recall one and only one Marine-relevant fact—that my boot camp drill instructor did require us to measure two fingers above our eyes as the resting point for our "covers" [hats]. So, six decades later, I continue to unconsciously imitate behavior acquired in a long-gone context.

We all share my Scotsman’s proclivity for interpreting identity based upon superficial appearances. Imagine walking into a bustling café, the hum of conversations filling the air, and taking a seat at a corner table. Without exchanging a single word with anyone, you begin to notice the people around you. A man in a tailored suit sips his espresso while glancing at his tablet. A young woman with brightly dyed blue hair and a collection of pins on her backpack sketches in a notebook. Across the room, a couple wearing matching workout gear share a smoothie after what looks like a rigorous morning run. Each person is silently telling a story about who they are, using the subtle language of nonverbal cues and personal presentation.

Clothing is often the first thing that catches your eye. It’s not just fabric and stitching—it’s a form of communication. The man in the suit might be signaling professionalism or the importance of his role at work, while the young woman’s vibrant style and unique accessories suggest creativity and an alignment with subcultures or causes she’s passionate about. Traditional garments, like a hijab or kente cloth, may express cultural heritage or a deep connection to personal identity. Each choice, whether intentional or subconscious, reveals something about the wearer’s world.

Next, you notice how people carry themselves. The man in the suit sits upright, exuding confidence, his movements deliberate and measured. The young woman leans over her sketchbook, her shoulders slightly hunched, perhaps signaling intense focus or introversion. Body language is a powerful communicator of emotions and personality. A relaxed stance with open gestures can suggest approachability, while crossed arms and a downward gaze may hint at discomfort or defensiveness.

The café patrons’ physical appearances also provide subtle clues. Age lines on a face might hint at life experience, while ethnic features or hair texture can reflect ancestral heritage. The couple in matching workout gear, with their toned physiques, likely share a commitment to health and fitness, which might even be a cornerstone of their relationship. 

Your attention might be drawn to the young woman’s arm, adorned with intricate tattoos. Each design seems deliberate, like pieces of a puzzle telling a deeply personal story. Tattoos and body modifications often reflect significant life events, cultural affiliations, or deeply held values. Whether it’s a small, minimalist design or a full sleeve of vivid imagery, these marks are windows into the wearer's personal journey.

If you observe further, even grooming habits become part of the narrative. The man’s neatly combed hair and polished shoes might reflect a lifestyle of precision and order, while the woman’s bold blue hair and artistic vibe could signify rebellion against convention or a creative spirit. Makeup choices—whether bold or understated—also serve as a canvas for self-expression.

As I often do, it’s time for me to emphasize the role of context in determining your observations. In a café like this, you might guess that the suited man is a busy professional taking a break from his workday. The sketching woman could be an art student or a freelancer finding inspiration. The couple in workout gear might have just come from a nearby gym, prioritizing health and shared routines. Where people are and what they’re doing often provides a frame for understanding their identity. 

Although silent cues are intriguing, it’s important to approach them with sensitivity and humility. While appearances offer clues, they don’t tell the whole story. People are complex, and their identities are multi-layered, often defying easy categorization. Behavior always is multi-determined. The man in the suit might be heading to a job interview rather than a board meeting. The young woman with the blue hair might be an introvert finding her voice through art. Identity assumptions, even well-intentioned ones, are always provisional hypotheses that need to be supported or refuted.


Sunday, December 1, 2024

Influenced by Influencers?

The power of influence largely derives from the ways that it appeals to our cognitions and emotions. Professional influencers have clear, specific goals that they hope to achieve. They first strive to seize your attention. Then they seek to prepare you cognitively and emotionally to receive their message before they begin to deliver it. Once that is attained, they want to ensure that your frame of mind continues to remain as fully consistent with their goals as possible. In the language of psychology, that means the influencer offers “advance organizers” intended to create in you an enduring “mental set” that renders you continually susceptible to them. Further, because the influencer knows that his likelihood of success depends on how you align with him, he wants to make you feel that he values you. To do so, he might appeal to any of your thoughts and/or emotions. In the present chapter, we look at your mental functions that can be manipulated by those who want to exert their control, and how they might do so.    

Attention  

Most of our mental functions are steered by attention—attention that can be deliberate or incidental. "Attention capture" is a marketer's first objective. They understand our primitive orienting response, by which we turn our attention to a sudden and/or distinctive environmental stimulus. For instance, in the recent past, we all noticed that television commercials typically were broadcast louder than the shows they accompanied. After a citizen revolt, the United States government passed the Commercial Advertisement Loudness Mitigation (CALM) Act. However, on its website (https://www.fcc.gov/consumers/guides/loud-commercials-tv), the current Federal Communications Commission (FCC) rationalizes that "Some commercials with louder and quieter moments may still seem 'too loud' to some viewers, but are still in compliance because average volume is the rule. The FCC does not monitor programming for loud commercials. We rely on people like you to let us know if they think there's a problem. If you have experienced what you believe is a violation of the rules regarding the loudness of commercial TV ads, you may file a complaint with the FCC at no cost."

Whether watching television or strolling the avenue, then, we ultimately are responsible for deciding how we direct our deliberate attention. That is a first step in maintaining control of our decision making. Individuals who deliberately attend to every new stimulus, fad, or circulating meme will find plenty of reasons to be influenced. Incidental attention, obviously, is less amenable to conscious control, but it is not totally intractable; we manage it by confining ourselves, as much as possible, to non-coercive places, people, and information. The more we position ourselves in open settings populated by open people, the more we can maintain and express our own opinions and rationally evaluate the external influences exerted upon us. When we are in closed settings populated by chauvinistic people, we are less prepared and less inclined to resist their influences ...

The preceding is a brief excerpt from my book, Justifiably Paranoid: Resisting Intrusive and Malicious Influences: https://www.amazon.com/Justifiably-Paranoid-Resisting-Intrusive-Influences/dp/1793449597?ref_=ast_author_dp