Early Establishment of Symbol Components

All sorts of involved material has been written about the development of cognitive precursors in prenatal people, including the functions that contribute to symbolization (and therefore language). I can’t do it justice in this summary; however, if you want to understand what has not been going well for someone, then you need to know at least something about what should have been happening.

So here goes.

You don’t have to read it now, but I will link to the section of my communication tutorial that details the full array of sensory systems that are available to us. That link (a) creates an easily available reference and (b) grounds the following uses of the terms “exteroceptive” and “interoceptive.”

Prenatal exteroceptive senses develop in the following order: touch, taste, smell, hearing, and sight. Red can be distinguished. Objects receive enough visual focus to be tracked, which implies at least some engagement in figure/ground distinction.

There isn’t a whole lot of reliable information on the prenatal development of the interoceptive senses, beyond various suggestions that such development seems likely, and some relatively speculative asides in descriptions of more general sensorimotor observations. Pain is sensed. The senses that rely on gravity would be affected to some extent by the fact that the prenatal person is floating, so generalization to life in/on the outside would not be straightforward.

Crucially, though, we know that prenatals can extract at least some of the signal from the noise in their world, turning at least some of their time-stream into relevant segments of information.

Learning to determine such boundaries is a massively big deal.

And so is continuing to determine them thereafter.
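To make “determining boundaries” a bit more concrete, here is a minimal, purely illustrative sketch in Python. It is not a model of prenatal cognition, and every name and number in it is my own invention; it just marks boundaries in a noisy one-dimensional time-stream wherever short-term energy crosses a threshold, splitting the stream into candidate segments.

    import numpy as np

    def segment_boundaries(stream, frame=64, threshold=0.1):
        # Illustrative only: split a 1-D time-stream at the points where
        # short-term energy (mean squared amplitude per frame) crosses a
        # threshold, i.e., where "signal" separates from "noise".
        n_frames = len(stream) // frame
        energy = np.array([np.mean(stream[i * frame:(i + 1) * frame] ** 2)
                           for i in range(n_frames)])
        active = energy > threshold          # frames loud enough to count
        flips = np.flatnonzero(np.diff(active.astype(int)))
        return (flips + 1) * frame           # boundary positions in samples

    # Toy usage: quiet noise, then a clear tone, then quiet noise again.
    rng = np.random.default_rng(0)
    quiet = 0.05 * rng.standard_normal(512)
    tone = np.sin(np.linspace(0, 60 * np.pi, 512)) + 0.05 * rng.standard_normal(512)
    print(segment_boundaries(np.concatenate([quiet, tone, quiet])))  # ~[512 1024]

The point of the toy is simply that once boundaries exist, the stretches between them can be treated as units, and units are what all of the later symbolic work builds on.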

I am going to try to narrow things down a bit.

Let’s Make Some Noise

Before you are born, when things go well, you have already started to learn how to:

  • distinguish your (proto-)self from your environment,
  • segment your auditory time-stream into classes of sounds, and
  • do some other stuff (which we have already skimmed over).

Those are some pretty confident statements on my part. I base them on a great deal of other people’s methodologically sound research, research that crucially remains ethical despite having been conducted directly on humans across the lifespan (sometimes invasively enough to introduce microphones into the prenatal environment).

I raise this concern because we’re getting into some fundamental research that is iffier than I would like. Whatever crosses a certain ethical line, we’ll simply do without.

Having declared that restriction, let’s continue…

Perceptual Organization

Here’s a dilemma:

  • perceptual organization is just too huge a topic to cover here; and yet,
  • it’s super important to an adequate understanding of what’s going on with ~GLPs.

So I don’t suppose that you’d want to read a tome and then come back to this?

No?

Okay, well, fine…

Let’s see what we can do.

Perceptual Grouping

Part of perceptual organization is referred to as perceptual grouping. This was described early on by Gestalt psychologists in terms of such functions as ‘similarity’ and ‘proximity’, functions that are at least cognitively plausible despite having been arrived at through philosophical speculation rather than empirical demonstration.
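As a toy illustration of what a function like ‘proximity’ amounts to computationally (my own sketch, not a Gestalt formalism): elements whose neighbours lie within some maximum gap land in the same group.

    def group_by_proximity(positions, max_gap=1.0):
        # Toy 'proximity' grouping: sorted elements separated by no more
        # than max_gap from their neighbour fall into the same group.
        groups, current = [], []
        for p in sorted(positions):
            if current and p - current[-1] > max_gap:
                groups.append(current)       # gap too big: close the group
                current = []
            current.append(p)
        if current:
            groups.append(current)
        return groups

    print(group_by_proximity([0.1, 0.3, 0.5, 5.0, 5.2, 9.9]))
    # [[0.1, 0.3, 0.5], [5.0, 5.2], [9.9]]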

This “traditional” model has been succeeded over time by the results of neurological studies. There are far too many of those for me to review here; however, one of the more recent is particularly germane to our discussion (and hella interesting to boot). Its results indicate that, for auditory information, we rely on cues derived from natural statistics in an auditory signal’s spectrotemporal features.

Here’s how a cue is formalized in the footnoted work:

  • identify the spectrotemporal features to be measured (i.e., “including single frequencies, sets of harmonic frequencies, clicks, and noise bursts”);
  • measure pairwise co-occurrence statistics for those features (e.g., local strength of things like common onset, fundamental frequency, and “separation in acoustic and modulation frequency”); and then
  • define a cue as “a stimulus property that predicts co-occurrence of pairs of features as derived from measured pairwise co-occurrence statistics” (a rough computational sketch follows this list).
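Here is one reading of that recipe in code, restricted to a single cue, common onset. The function name, tolerance, and toy data are my inventions for illustration, not the study’s actual code or parameters.

    def onset_cooccurrence(onsets_a, onsets_b, tolerance=0.02):
        # Toy co-occurrence statistic: the fraction of feature A's onsets
        # that fall within `tolerance` seconds of one of feature B's onsets.
        hits = sum(any(abs(a - b) <= tolerance for b in onsets_b)
                   for a in onsets_a)
        return hits / len(onsets_a)

    # Toy data: a harmonic starts whenever its fundamental starts (common
    # onset); an unrelated noise burst starts on its own schedule.
    f0 = [0.10, 0.50, 0.90]
    harmonic = [0.11, 0.49, 0.91]
    noise = [0.30, 0.70]

    print(onset_cooccurrence(f0, harmonic))  # 1.0 -> strong grouping cue
    print(onset_cooccurrence(f0, noise))     # 0.0 -> weak grouping cue

A harmonic that reliably starts when its fundamental starts yields a strong statistic; the unrelated burst does not, so common onset ends up predicting which features belong together.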

And here’s the cool part:

“We found evidence that the auditory system has internalized these statistics and uses them to group features into coherent objects.”

Cast in the vocabulary used in this discussion, we would say that (a) we develop a function that segments our perception of the auditory time-stream, and (b) those segments become and remain cognitively manipulable.
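Continuing the sketch under the same caveats, “grouping features into coherent objects” can be read as finding connected components among features whose pairwise cue strength clears a threshold; the matrix below is invented toy data.

    def group_features(cue_strength, threshold=0.5):
        # Toy grouping: treat features as graph nodes, connect any pair
        # whose cue strength beats the threshold, and return the
        # connected components as candidate "auditory objects".
        unvisited, objects = set(range(len(cue_strength))), []
        while unvisited:
            stack, component = [unvisited.pop()], []
            while stack:
                i = stack.pop()
                component.append(i)
                linked = {j for j in unvisited if cue_strength[i][j] > threshold}
                unvisited -= linked
                stack.extend(linked)
            objects.append(sorted(component))
        return objects

    # Features 0 and 1 share a strong cue (say, common onset); feature 2
    # is unrelated to both.
    cues = [[1.0, 0.9, 0.1],
            [0.9, 1.0, 0.0],
            [0.1, 0.0, 1.0]]
    print(group_features(cues))  # [[0, 1], [2]] -> two objects

Each component then behaves as one manipulable segment in the sense of (a) and (b) above.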

All of this helps to support higher-order functions, such as figure/ground perception, where again the traditional (formalist) explanations have given way to the likes of cognitive linguistic approaches.

Natural Helpers

Perceptual grouping is aided by the natural boundaries that our array of anatomical sensors defines; that is to say, we receive light separately from sound, and so on.

I don’t know of cases in which one type of sensory input is transduced into the form of another sense; for example, there don’t seem to be dysfunctions in which sound input is transduced in such a way as to activate the optic nerve.

But when you get to the perceptual processes themselves, things can get messy. There can be issues with synaesthesia (and ideasthesia) and other stuff. I am really trying hard not to dig into another book-length topic, especially right here.

So there might be people who not only face challenges with segmentation, but who are trying to segment time-streams whose sensory channels have not remained discrete.

As it happens, I don’t know of a person whom this describes, so I am going to leave it alone for the time being.

Head to the next stage of the tutorial.
