(The main Critique of Psychotherapy on this web page does not appear in the menus above.)
Welcome. This critical psychology website exists to promote science in psychology and psychotherapy.
Visitors to these pages are urged to provide a link to them. This page offers a major critique of clinical
psychology & psychotherapy that has been backed by major psychologists, including Alvin Mahrer and
Allen Ivey. This page also exists to promote good client care and good counselor selection. If you
provide a link here, you can use the address http://cyberper.cnc.net. IN ANY CASE, LINKS TO THESE PAGES
ARE ENCOURAGED. Thanks. -- Brad Jesness, M.A.
This is the Brad Jesness CORE PSYCHOLOGY WEBSITE. For counselors, their clients, applied researchers,
personality psychologists, learning theorists, ethologists, clinical/counseling researchers, developmental
psychologists, peer counselor selection, and objective inventory/test scoring. UPDATED, October, 2010
Click here to show a Table of Contents (with jump links to major sections of this page!)
** CORE PSYCHOLOGY WEB PAGE **
Author: Brad Jesness, M.A. (early Member of the American Psychological Society;
early Member of the International Society for Human Ethology)
The contact address for me, the webmaster of this site, is firstname.lastname@example.org --
email@example.com is an email address which is not used (it's a spam trap).
Dear Web Page Visitor,
Welcome to the web page for a major critique of the counseling/"therapy" field; an ethological conceptualization of learning; and a programming algorithm so you may computer-score personality inventories on your own, including the NEWEST, MOST MODERN AND EASIEST WAY KNOWN TO MAN.
*** AND *** I am now offering my Peer Counselor Selection Tool for further testing and use.
THIS IS A WEB PAGE THAT HAS 3 DISTINCT SECTIONS:
The first 2/3 of the body of this text "page", or about the first 30 pages, is a MAJOR CRITIQUE OF THE COUNSELING/"THERAPY" FIELD. (It is written up as an "Ans. to FAQs" for clients.)
The next 10 pages are AN ETHOLOGICAL CONCEPTUALIZATION OF LEARNING. (This is from a cognitive-developmental perspective; the major findings on memory are utilized and incorporated into this perspective.) (This section is easy to locate because the text has WIDER margins than those of Section One.)
This was published in 1987 in a major ethology periodical. It remains accurate and timely especially now.
The final few pages of this text web page is an automatic test/inventory scorer. (This allows for the scoring
*BY YOU*, YOURSELF, of any psychological inventory, even those with 100+ scales or subscales.)
A link in this section gives you access to the free automatic Universal Inventory/Test Scorer.
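The scorer itself is available through the link; as a general illustration only (this is NOT the author's algorithm, and the scoring key below is entirely hypothetical), computer-scoring a personality inventory typically amounts to summing keyed item responses per scale, which extends naturally to inventories with 100+ scales:

```python
# Generic inventory-scoring sketch (hypothetical key, not the author's tool):
# each scale is defined by a keying of items, and a scale score is the
# weighted sum of the respondent's keyed answers.

SCORING_KEY = {
    "Dominance":   {1: +1, 4: +1, 7: -1},   # item number -> keying weight
    "Sociability": {2: +1, 5: -1, 8: +1},
}

def score_inventory(responses, key=SCORING_KEY):
    """responses: dict mapping item number -> respondent's answer (0/1)."""
    return {scale: sum(weight * responses.get(item, 0)
                       for item, weight in items.items())
            for scale, items in key.items()}

answers = {1: 1, 2: 1, 4: 0, 5: 1, 7: 1, 8: 1}
print(score_inventory(answers))   # -> {'Dominance': 0, 'Sociability': 1}
```

The same loop handles any number of scales or subscales; only the key grows.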
PLUS I offer visitors to this page free use of my Peer Counselor Selection Tool (all research-based;
successfully piloted; based on CPI scores). Links to this and the relevant research are here.
AND, if you are seeking some personal insights: Try the new Real / Ideal Self Checklist.
(For a brief statement of my views,
USE THIS LINK.)
Ans. to FAQs: Information and Precautions for Psychotherapy Consumers
"Information and Precautions for Psychotherapy Consumers": Part 1:
Most often, "therapists" have no special (and no meaningful) professional
role as *science-practitioners* (using any reasonable definition or
standard). (This is true whether you are talking about what lay people
commonly consider a full real science or how scientists define it -- in
basic ways the layman's outlook and that of the scientist are actually
quite similar on what constitutes real "science".)
Counselors/"therapists" are typically (BY THEIR BEHAVIOR, or modus
operandi) no more scientists than many of us.
Furthermore, there is an acknowledged LACK OF GOOD STANDARDS FOR
CONSIDERING A "THERAPY" VALIDATED. Division 17 (Counseling
Psychology) of the APA is *AT PRESENT* trying to find better (and
acceptable) standards for considering a "therapy" validated. GIVEN THIS
AND GIVEN THE STATE OF THE FIELD OTHERWISE (AND GIVEN COMMON
INEXCUSABLE FAULTS IN STANDARDS-IN-PRACTICE *AND* INEXCUSABLE
RESEARCH GAPS) THIS IS WHAT I RECOMMEND:
AS A COUNSELING AND PSYCHOLOGY INSTRUCTOR THIS IS WHAT I TELL
PEOPLE SEEKING COUNSELING: Don't use a counselor/therapist who does
not subscribe to the basic tenets of Client Advocates. The positions taken
in the manifesto (quoted below) are so basic to good reason and good
honest service that I recommend people show a copy of the manifesto to
prospective counselors/therapists before choosing one. Ask: "Do you
subscribe to the positions stated here?" If the answer is "no," ask:
"What problems do you have with it?" If there are any "problems," except
with the idea of paraprofessionals, THEN seek another helper. No one
should accept a counselor/therapist (at least one of his own choice) who
does not subscribe to such reasonable and basic principles of
fairness, good science conduct, and good practice. Some of the very best
in the counseling field support the tenets of Client Advocates and any
good counselor/therapist should as well. It is in the interest of all.
The "Client Advocates" Manifesto:
"Client Advocates": It is a client and science advocacy group,
dedicated to furthering science standards and practices in the therapy
field. We insist on fair and proper representation of treatments and on
providing information about costly or limited treatment options available
to clients "up front". We believe options and evidence of their
efficacies should be provided to clients before they enter a course of
counseling or therapy. The various treatments and programs offered by
each professional mental health service provider should be outlined in
some detail in a booklet made available to clients. Only this would provide
reasonable information before the expense of and commitment to a course of counseling or therapy.
Also, techniques or methods used that have NOT been clearly shown to
have efficacy AND validated for a particular, reliably-identifiable problem
type (i.e. showing blind inter-rater reliability) are NOT to be referred to
as "therapy." Correspondingly, when what is done is COUNSELING, the
cooperative nature of this should be made clear and it
should be properly represented, engendering appropriate expectations.
Counseling is considered a most noble cooperative endeavor, requiring the
most consideration, judgment, and intelligence. Those who are
well-adapted will be better counselors. For this reason, and considering
the rest of the evidence, counselors/therapists should have a long history
of good adaptation.
Moreover, Client Advocates believes daily standards in practice
should provide for on-going research (such as for the development of
reliable diagnoses) and this should be done within each large mental
health service agency. Furthermore, basic foundation research
definitively showing that graduate-school-trained counselors are
superior to other sources of help must be done to establish the range of
problems for which special treatment by professionals is actually better
(and not inferior to other more accessible and less costly sources of
help, e.g. peer counselors or paraprofessionals). Client Advocates also
supports (given at present there is no evidence against it and some good
evidence in its favor): peer counseling programs and counseling programs
for paraprofessionals. Client Advocates seeks to demystify the
mental health professions and rid them of great myths. We hope for a
sensible, delineated mental health care SYSTEM, with the care often
involving peers and paraprofessionals and for care to be provided by
individuals within a client's working community.
PURPOSES OF THE ORGANIZATION:
Client Advocates functions as a support organization for people standing up
for what's right: what's right for themselves if they are a client and
what is right for the science and the field. WE encourage and support
each other. We can specialize as need be or work together. BUT: All
initiatives are INDIVIDUAL. Clients are encouraged to seek what is right
for themselves by asserting the tenets of the group's manifesto.
Researchers work toward and promote needed foundation research and the
exploration of new untested ideas. They also work for more of a true
science-practitioner role for clinicians.
Part 2: Other FACTS and INDICATIONS that (unfortunately) Make
"Therapists" Mad (It's not just ideas !)
A major set of FOUNDATION research studies for the
counseling/"therapy" field has not yet been done. AND indeed, ONLY 3
CONTROLLED studies (the last in 1979 !) have been done comparing the
effects of counseling from professionals *with* counseling from "other
reasonable helpers" (with no professional grad. training). THIS, in
spite of the fact that these best studies in the area essentially show
that other REASONABLE helpers do as well for arguably a broad range of
problems. These studies, at the same time, indicate the other helpers are
an ethical comparison group, having been found *good* for a broad range of
problems for which counseling is most often sought. More recently much
research shows peer counselors in colleges to be VERY helpful (though
their performance is NOT directly compared to that of professional
helpers in these studies).
ANYWAY, these studies are NEEDED to show where professionals ARE
really needed AND where treatments need to be developed (as is, this
situation REMAINS VERY UNCLEAR). These studies might well also indicate
the desirability of other mental health care provider roles (like well-selected
and well-trained peer counselors and/or more extensively trained paraprofessionals).
Now to the "ethics" matter (the first defense of the many backing the
status quo in the field): Not only have other reasonable helpers been
shown effective for a broad range of problems in past studies, BUT ALSO:
"other helpers" (peer counselors or "paras"), used as a comparison group
to professionals (professionals who are licensed & grad.-trained), would
ETHICALLY only have to be NO WORSE than the NO TREATMENT groups (or
waitlist control groups) used today OR NO WORSE than the placebo
controls used today for the study to be considered ethical. *AS WITH* the
types of studies now done, clients treated by peer counselors OR "paras"
could be offered professional care AFTER the study. (Today waitlist
people wait up to around 3 months for treatment -- they just wait until
the other exactly equivalently disturbed group is treated.)
AGAIN: Without these studies we do NOT KNOW where professionals are
really needed or most needed. Areas where treatment developments are
most needed are not being identified. (I hope readers appreciate these and
other LIKELY negative effects ON CLIENTS of an inexcusable LACK of work
in certain, basic areas of FOUNDATION RESEARCH.) Also, a reasonable,
delineated mental health care SYSTEM (with a variety of helpers or at
least specializations) is NOT being developed. IT REALLY CAN'T BE FROM
ONE STANDPOINT: *BASIC FOUNDATION* RESEARCH IS *NECESSARY*.
There are many things about which one cannot conclude without clear evidence from such foundation research.
The system, as is, is irrationally defending the status quo and IS DENYING
us good health care.
Something else that bothers "therapists" is when one describes, in
comparable terms, the strength of the average result with therapy!
Frequently the shift with therapy IS ONLY AS BIG as the DIFFERENCE you
find between the *average* man and *average* woman on certain
personality inventory scales. And the behavioral shift or results of
"therapy" are only HALF as much as the difference shown between the
average man and average woman on some personality inventory scales.
AND, you would be lucky to achieve a shift this large with the hard-to-treat
disorders or with hard-to-treat problems. For example, with anger, the
average effect (or *shift*) from therapy will, IF YOU ARE LUCKY, be about
HALF the size of the difference between the AVERAGE man and AVERAGE
woman ON AN "aggressiveness" personality scale ! ALSO virtually all
these results are based on comparing the effects of therapy against
the effects of NO TREATMENT (i.e. neglected clients) or clients treated
with placebo treatments, THOUGHT TO BE INERT ! If you compared the
results of "therapy" with the results you would *OFTEN* (and probably,
*usually*) get with help from ANY OTHER REASONABLE non-professional
helper, the results of "therapy" would be MUCH MUCH less. ALL the BEST
evidence indicates the increased results of "therapy" are AT MOST very
scant when compared to the effects of help from others. This evidence
indicates in many, if not most cases, professional "results" would show no
difference from results obtained with help from the other helpers.
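The comparisons above are statements about standardized effect sizes: the therapy "shift" measured in standard-deviation units, set against the standardized male/female difference on a personality scale. A minimal sketch of that arithmetic, using made-up illustrative numbers (NOT figures taken from any study cited here):

```python
def cohens_d(mean_a, mean_b, pooled_sd):
    """Standardized mean difference between two groups (Cohen's d)."""
    return (mean_a - mean_b) / pooled_sd

# Hypothetical T-score-style scale (mean 50, SD 10); all numbers invented:
d_sex = cohens_d(56.0, 48.0, 10.0)      # avg man vs. avg woman on an
                                        # "aggressiveness"-type scale: d = 0.8
d_therapy = cohens_d(52.0, 48.0, 10.0)  # treated vs. untreated group on
                                        # the outcome measure: d = 0.4
# The essay's point: for hard-to-treat problems the therapy shift is
# often only about HALF the size of the sex difference.
print(d_therapy / d_sex)   # -> 0.5
```

Expressing both differences on the same standardized footing is what makes the "half as big as the male/female difference" comparison meaningful.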
Then one can ALSO go on and show indications in the research
literature that the results of efficacy studies may be TWICE as strong in
some studies as in others -- AND APPARENTLY RELATED TO THE TYPE OF
CONTROL GROUP USED !! (see Anderson and Lambert, 1995). J. Frank has
argued that waitlist control groups (and other "no treatment" control
groups) are neglected clients, who may do less than what a normal person
would do to seek help while "waiting." THIS may account for the TREND
that studies using waitlist controls *MAY* show up to TWICE the amount
of change [supposedly] due to therapy, as placebo control groups (where
the placebo treatment is thought to be inert !) Significant results here
*may* only await meta-analyses examining larger numbers of studies
(again, see Anderson and Lambert, 1995, for the trend).
Part 3: Other Facts and "Dangerous Ideas" about "Therapy":
Therapists are surely lousy scientists. Their time may not be worth
the money unless you are unsophisticated AND out of control. It is
arguable that clear, well validated techniques exist only for some anxiety
disorders, some depressive conditions, and a few other problem
conditions. It is only in these minority cases that some mental health
care providers can even be seen as operating reasonably IN ACCORD with
science. Otherwise, visiting them *may* itself be ill-advised. Clinical
psychologists do not have the discipline to establish good operational
definitions WITHIN AGENCIES (e.g. for defining (i.e. diagnosing)
personality disorders). NO PROGRESS CAN BE MADE UNDER THESE
CIRCUMSTANCES (and many other similar problems-in-science cases).
Because they are not scientists they cannot progress OR really work
well together. They cannot self-evaluate. DSM criteria are so far from
good operational definitions, I would not dignify them with the word
"criteria." I know of no counselor or agency that has made any credible
attempt at scientific respectability (or any that could be argued to be
doing such). It is simply pitiful and inexcusable. Practice, as is, is
actually an abuse of power and taking advantage of vulnerable
populations. Someday such practice may result in law suits. Using
diagnostic procedures that do lead to excellent inter-rater agreement is
certainly possible today, not only at some level but at a useful level.
At present counselors and therapists don't even respect each other.
Regarding the therapists' major guide for objectivity, the Diagnostic
and Stat. Manual of the Amer. Psychiatric Assoc.: It is without question
that one could develop criteria-through-procedures that show MUCH better
inter-rater agreement than the DSM. The last time the Amer. Psychia.
Assoc. published and reported COLLECTED reliability data (within the DSM
itself (DSM III)), there was only an r=.7 correlation between clinicians
AS TO WHETHER a client had a disorder in the Mood Disorder GROUP (or
NOT). SIMILARLY, there was an equally low level of agreement on whether
a client had a disorder in an Anxiety CATEGORY (or NOT) (quite
inadequate!!). (Often there is disagreement on whether a disorder is an
Anxiety Disorder or a Mood Disorder.) AND this is all beside the issue
that the "diagnoses" are possibly good for very little and possibly often
more destructive than constructive. VERY VERY little work was done
investigating the inter-rater reliability of criteria *between* DSM-III
and the meeting of the DSM-IV committee to define "new" diagnostic
"options." In fact, only 14 of the top 40 diagnoses had ANY inter-rater
reliability data generated on their criteria in the 15 years since DSM-III
(source: DSM-IV Sourcebook, Vol. 2). Judging by the "new" ICD-10
criteria and their inter-rater reliabilities, we can expect the DSM-IV
diagnostic criteria to show little better inter-rater reliabilities than
DSM-III (the DSM-IV criteria were made to be very similar and consistent with the ICD-10 criteria).
To comfort us in some way a number of therapists say "we don't like
diagnoses either." A GOOD RETORT:
I don't care about diagnoses, but you still need good definitions
THROUGH THE PROCEDURES YOU USE within an agency to have the minimum
science standard -- decent inter rater agreement. Otherwise you cannot
discuss anything clearly with any others (you can't communicate). I am in
no way comforted by the INDIVIDUAL therapist making his decisions in
idiosyncratic ways, with way too little accountability. (It is a
principle: power corrupts. Without accountability or communication you
will have an inappropriate degree of power BECAUSE it is in no way
appropriately negotiated, sanctioned, or scientifically monitored.)
Since I am trained in psychology myself I know what is meant when it is
said that therapists are "trained in scientific methods." Trouble is
they engage in no regular (much less integral) scientific PROCEDURES in
the normal or typical conduct of their work. This is true to such a
degree it is unacceptable. And it is true of all therapists I know of.
Again, their failure to develop operational definitions of personality
disorders that at least show excellent within agency inter-rater
reliability is an excellent illustration. There is correspondingly a lack
of proven agreement on the application of procedures (loosely called
"therapies") and on the assessment of results IN actual practice. The field
itself recognizes deficiencies in how "therapies" are considered
"validated." (Obviously with this problem most treatments should NOT be
The fact that the idea of scientific procedures INTEGRAL to a therapist's
daily work makes no sense to many therapists is not surprising. THERE
ARE NONE!! I would hope you could see a problem there. While
psychologists hear a lot about scientific methods, they do not learn to use
them in an integrated and realistic way (even in the "ivory tower"). No
wonder when the controls of grad. school are gone and no others exist (as
it is with most therapists), even the mock "science" behavior no longer occurs.
Further, they are not only lousy scientists, but arguably not the
more well-adapted people. Internal Locus of Control predicts high quality
adaptation in many areas AND its opposite predicts maladaptation (e.g.
depression, anxiety, under-achievement...). YET, Locus of Control does
not correlate with who becomes a counselor. And, L of C is not correlated
with "effectiveness of counselors-in-training" (and this is an empirical
fact), likely because it is not correlated with those who go into the
field in the first place (this is certainly a possible and very likely
explanation for this finding). Elaborating on this idea: if you "appreciate"
counseling, it is not unlikely that you are a dependent type, like the
unfortunate individuals counselors/therapists make their living off of.
This also may account for why so little initiative and good work is done
in the field by individuals and individual agencies. Unpublished data
(which according to Dr. Scot McNary is now published) on one large
sample of therapists shows that MOST had MUCH therapy
themselves (quite possibly 10 times that of an average client receiving
"therapy"), and MUCH if not MOST of this after they become licensed
practitioners. From that research, it is not clear whether this is a sign of
some sort of elitism or significant maladaptation.
In any case, it appears "therapists" simply lack independence. This
combined with only cursory exposure to science and NO science activity
integral to daily practices makes "therapists" a highly suspect group. I
have outlined cautions any reasonable consumer should take.
By the way, no one has shown that professional counselors/"therapists"
are necessarily especially good at developing empathetic relationships,
though much of their training is here. THE REASON: Empathy may be more
the product of good development and successful adaptation than it is a
learned thing (i.e. it may not be the product of school learning). There are
some indications that those going into counseling/clinical work are more
maladapted than the average person (I have cited 2 big indications, above).
Thus, the training (empathy-through-school learning) may not even make
up for the deficiencies of responding many clinical people have because of
poor adaptation. Often the poorly adapted sense the emotions of the other
*but react defensively* and *cannot maintain the perspective of the other
(the client)*. This is why good selection (better procedures than those
used today) is likely essential for getting good counselors.
Therapists can be (and I believe often are) nasty people when they
don't have your utter cooperation (which you pay for). If you're not
confused and uneducated, you maybe should not even consider a therapist.
AND IF YOU ARE inexplicably confused consider psychiatry (drugs). Whether
it's physiological or psychogenic (and the line between the two, with
time, certainly could become blurred for more than 1 reason) you can't be
too far gone (confused) for a psychologist. They require everything be
optimal for their "magic" to work. Friends may not be as good AS SOME
counselors hour for hour, but you can afford more than a few hours with a
friend. (Research has shown that, for a rather general subgroup of
college students-who-felt-they-had-problems, being assigned to a
listening professor (in any field) was as good as meeting with a counselor.
Thus even hour-for-hour a friend might be as good. )
There are other more concrete or apparent problems. Therapists so
much expect your cooperation (or to control you) that they will not even
take any amount of time to explain their approach. You do not know what
kind of program they will use typically until after you start. It will
therefore take at least a session (possibly $100) to find out. This is
neither fair nor therapeutic. Why should they expect business only by
reputation? No one else who will earn many hundreds of dollars from you
(if you do just what is usual) would refuse to explain what they are going to
do. You wouldn't accept this from a carpenter (and don't for a minute try
to tell me their skills are not at as high a level). I suspect that the
people therapists typically get as clients are those they can keep no
matter what. What therapists typically do is a totally unsatisfactory
business policy and I think it is unprofessional and unethical too. It
was, in good part, in response to this concrete problem that a major
portion of the manifesto was written.
** FOOTNOTE to Part 3: The way progress in developing "better"
diagnostic criteria proceeds today illustrates what is wrong with the way
things operate and are done today. It displays the lack of appreciation
for the grassroots INDUCTIVE work that, it seems to me, has to be done.
True, the "diagnostic options" decided on by the DSM COMMITTEES every
decade or so *are tested* AFTER THE FACT for inter-rater reliability (AS I
MYSELF AM AWARE, and as I indicated in the essay). BUT, the problem is:
Do you wait for rare committee meetings to try to piece together a set of
best "options" on a relatively rare basis OR do you strive for better
reliabilities for criteria *AND better criteria* more often, on a more
local level ?? Yes, PEOPLE must first have "guesses" about what might be
better criteria and *then* investigate them. BUT this need not ALL be
done by rare committees ALONE doing this work. Doing virtually all such
diagnostic development work JUST by committee (meeting every decade
or so) is loaded on a hypothetico-deductive side as opposed to a
grassroots, more inductive, discovery (and yes, trial and error) approach.
One could argue that INDEED you DO (and MUST) **DISCOVER** the
better criteria, rather than formulate them en masse in our heads during
"big committee" meetings. What our present attitude suggests, and what
is done now, is the figuring of nature out in our heads and then (only
afterward) testing our limited range of relatively constrained ideas.
Wouldn't it be better for some local consistent (**everyday**) work to go
on to find criteria that are understandable and show (demonstrate)
inter-rater reliabilities and also relate to disorders? SHOULDN'T THIS AT
LEAST BE DONE *IN ADDITION* TO THE inter-rater reliability work
associated every decade or so with "committee work" ? PRESENTLY THIS
IS NOT DONE, AND I WOULD ARGUE IS ONE BIG THING THAT HOLDS UP
DEVELOPMENT OF THE FIELD. We are basically both being pompous and
pretentious, while at the same time abdicating basic science responsibilities.
Part 4: Specifically:
How good are Clin. Psychol. at BASIC Diagnoses ?: OFTEN NOT worth a Damn
It is interesting to compare standards of inter-rater agreement used
within the field of clinical psychology ITSELF. On several major
diagnoses (where collected data is available), the inter-rater clinician
agreement ON diagnoses is not as good as a standard sought for agreement
on the "coding" of client responses to INK BLOTS. Let me give you some
details, just to put things in perspective: A professor in Alaska named G.
Meyer is doing research on the inter-rater reliabilities of standard
coding of client responses to **INK BLOTS** (on the Rorschach). Here is
what he says about inter-rater reliabilities: "Typically Kappa values
above .60 are considered good agreement *beyond chance*, while values
above .80 are considered excellent agreement *beyond chance*." (The *s
were inserted by me.) NOW, this is a guy evaluating reliability (across
raters) of what client responses to *INK BLOTS* mean (in standard
"coded" terms). Isn't it amazing that 13 out of the 27 specific ICD-10
diagnoses I list or mention BELOW (*leaving out ONLY childhood, organic,
and substance abuse disorders*, among the listing of specific disorders
provided by the Journal cited) -- or about half of the diagnoses -- DO
*NOT* meet the Kappa= .6 standard of inter-rater agreement ? We cannot
diagnose people as well as we can sometimes agree on what ink blots
mean !!! Is this "okay"? I don't think so !!
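Kappa, the agreement-beyond-chance statistic Meyer uses, can be computed directly from two raters' labels: it is observed agreement minus the agreement expected from each rater's label frequencies alone, rescaled. A minimal sketch with hypothetical clinician ratings (the diagnoses and values below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of cases where the raters match.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement expected from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical diagnoses of 10 clients by two clinicians:
r1 = ["anxiety", "mood", "anxiety", "mood", "anxiety",
      "mood", "anxiety", "anxiety", "mood", "anxiety"]
r2 = ["anxiety", "mood", "anxiety", "anxiety", "anxiety",
      "mood", "mood", "anxiety", "mood", "anxiety"]
print(round(cohens_kappa(r1, r2), 2))   # -> 0.58
```

Note that 8 of 10 raw agreements (80%) shrinks to kappa = .58 once chance agreement is removed -- which is exactly why kappa, not raw percent agreement, is the standard being applied to the diagnostic data below.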
Here is the report of data that I refer to above:
Though DSM-IV field trial data have not been put together in
collected form yet, we can see the likely inter-rater reliabilities of
diagnoses by looking at ICD-10. DSM-IV was constructed to be VERY
consistent with ICD-10.
Here is some data from the Amer. Jour. of Psychiatry 152:10 (Oct.
1995). These are Kappa (correlation) coefficients of inter-rater
reliability (agreement) of clinicians ON **SPECIFIC** ICD-10 COMMON DISORDERS:
Panic disorder .64
Generalized Anx. Disorder .59
Adjustment disorder .58
Somatization disorder .56
Hypochondriacal disorder .56 (I would rate all this as POOR)
Histrionic disorder .25
Anxious pers. disorder .33
Schizotypal pers. dis. .48
Mixed anxiety and depr. dis. .09 (I would rate all this as "terrible")
Now, below, the ones showing better reliabilities from those same pages
(pp. 1431 - 1423) of the Journal report. Above and below, I am presenting
data ON **ALL** SPECIFIC *ADULT* disorders from that Journal, with a
few exceptions I have noted -- THUS, I am not attempting to bias or
slant the presentation:
neurasthenia (a neurotic disorder)
obsessive-compulsive dis. .85 (I would rate these "fair")
In fairness, I should *again* note that along with not including organic
disorders in my listing, I did not list substance abuse disorder figures
or disorders appearing in childhood. Also three particular rarer
somatoform disorders with inter-rater Kappas of .43-.56 and 2 sleep
disorders (with k=.62 and .55) were not listed *in the tables* above,
*BUT* I included them in my summary remarks about the data (above).
Here also, for your curiosity is some older data from the DSM-III (1980)
field trials (many of the diagnostic criteria did not change much since then):
schizophrenic disorders .86 <--- this is still less than the
minimum correlation tolerated for
classification work in ethology
paranoid disorder*S* .66 (note: here, this is *agreement* on
whether the clients had any disorder
in this small class of disorders OR NOT)
Major affective disorder*S* .69 (ditto the note above)
Anxiety disorder*s* (THIS IS agreement on whether patient has *any* of
this *larger* class of common disorders OR NOT):
Concluding Remarks: Clinical psychologists, grow up. Do the ACTUAL
science -practitioner work in your individual agencies WITH other
professionals, that I have previously described ! I told you what must
change AND HOW so ANOTHER 15 years don't go by again with hardly any
progress !!! Be real science-practitioners; stop just pretending,
misleading people and misrepresenting things. You are neither doing the
best you can NOR are you doing "ok".
Part 5: A Scary Hypothesis:
"Bolstered False Pretense Leads to Significant, Habitual Abuses of Power"
Does "bolstered false pretense" lead to anti-social behaviors ? I
believe so. A prime case of what I mean: My observations (along with
some psychological principles and logical reasoning) have led me to
conclude that people who are non-scientists, but pretend to be scientists
and/or stand on the pretense of being scientists, end up showing many of
the behaviors of those with anti-social personality. (One might note
that they quite possibly themselves believe or come to believe they are
scientists to a degree way beyond anything that is justified by any
normal OR reasonable science standard). Specifically, I believe they
strongly tend to show moral degradation and inordinate non-critical
self-acceptance, like those with anti-social personality. In many
circumstances, they learn slowly (IF AT ALL) to show more appropriate
behavior. Plus, if sensing power or "in-group" status, they show great
pretentiousness and defend it to the point of using deception
(intentionally; a "the ends justify the means" mentality).
I would add that the characteristics I outline are VERY similar to some of
the criteria for anti-social personality, and some of the "associated
features" of this disorder. The rest is consistent with some of data found
about anti-social personalities in the research literature.
I believe that, like the characteristics in the anti-social personality, if
one exhibits some of the characteristics, one will much more likely than
the average person exhibit other of the characteristics I have described
(under suitable eliciting circumstances). I believe the environment
engendering the "bolstered false pretense" is a developmental context
similar to some yielding the anti-social personality, and this syndrome
(and I do see it as a syndrome) is in many aspects a variant on the
socialized anti-social (though on a different "social status" "plane"). The
syndrome indicated by the set of characteristics described in my
hypothesis is consistent with the disorder construct (that the Amer.
Psychiatric Assoc. is so fond of using). It is also consistent with a trait
or dimensional interpretation if the trait is understood as a complex one
with many facets. The hypothesis is consistent with social psychology
principles. What appear to be instances of the phenomenon described by this
hypothesis seem to explain well what has been seen in the newsgroups
and on at least one mailing list.
I strongly suspect numerous cases of clinical power abuses could be
well-explained by my hypothesis. Again, my hypothesis is consistent with
social psychology principles and may also be seen as just a variant of a
general explanation for the development of anti-social behavior (this
special variant being different in ways BUT very similar in ways to the
regular development of anti-social behavior).
Clarification of my definition of science (THEN, BY its absence, what
would constitute "pretense to science" -- a major socially or societally
"bolstered false pretense" -- will be fairly clear):
My definition of minimally acting "in accord WITH science" is: that
one clearly makes use of a set of procedures (yielding determinations and
clear actions), from diagnoses through treatment through assessment of
results, that *ALL* have shown or proven their inter-rater reliabilities
(start to finish at each step of the process). This is analogous to good
diagnoses and treatment in medicine.
To be a scientist one must WORK WITH OTHERS in an integral
(day-to-day) way so better and better and more and more inter-rater
reliabilities are shown. And all this should be linked to concrete
things that have been agreed upon as important, and such a system of
proceeding should be linked to more and more good consequences (results)
and account for (and allow control of) more and more phenomena. Then
you would be a science-practitioner, one who PRACTICES science oneself.
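The inter-rater reliability this definition turns on can be quantified; one standard index is Cohen's kappa, which measures agreement between two raters corrected for chance. A minimal sketch (the diagnoses and labels below are invented for illustration, not taken from any real study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses by two clinicians for the same ten clients:
a = ["dep", "anx", "dep", "dep", "anx", "dep", "anx", "dep", "dep", "anx"]
b = ["dep", "anx", "dep", "anx", "anx", "dep", "anx", "dep", "dep", "dep"]
print(round(cohens_kappa(a, b), 3))  # -> 0.583
```

A field proceeding as described above would accumulate and publish such agreement figures routinely, at every step from diagnosis through outcome assessment, and work to drive them up.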
The fact that many clinical psychology practitioners falsely and
wrongly imply certainty for their procedures and call what they do
"therapy" when it is actually a less clearly determined (OR AGREED UPON)
set of procedures *AND ACTUALLY SHOULD BE CALLED "COUNSELING"*
itself shows the false pretense of the field.
This particular hypothesis *contains* the description of a syndrome
(similar in many features to anti-social personality). Given this, someone
has asked a very good question: "So, your hypothesis-cum-syndrome
-cum-antisocial-p.d. is now very similar to DSM-IV Antisocial Personality
Disorder, why not go with the real thing? Are you claiming that you have
discovered a NEW personality disorder that is very similar and for all
intents and purposes identical to an existing DSM-IV personality
disorder?" Answer: NO, I am not claiming to have found a new thing. I
view the syndrome as a variant manifestation and possibly a
"subthreshold" manifestation of the antisocial personality itself.
The kind of socially or societally "bolstered false pretense" I
describe is a set of circumstances that is IN MANY WAYS similar to one
possible route to socialized anti-social personality, thus justifying a
hypothesis of similarity in phenomenology and effect OF these
developmental circumstances ON CLINICAL PSYCHOLOGISTS.
Ans. to FAQs: Information and Precautions for Psychotherapy Consumers (continued)
Complaints of *GROSS NEGLIGENCE* Should be Filed Against the APA
The nature of the overall complaint is directly below. More about
the specific deficiencies leading to the complaint appears below that:
THE POSITION AND GENERAL COMPLAINT:
It is my view that as surely as we hold the medical establishment
(e.g. the CDC) responsible for epidemics affecting physical health, and as
surely as we hold the National Transportation Safety Board responsible for
airline safety, the Amer. Psychological Assoc. bears notable responsibility
when needed services are not being provided.
With the present critical rise in teen suicides, I question whether
all that is reasonable and responsible is being done for the provision of
a system of mental health care. If a problem goes unabated, the answer is
(as it would be for other agencies or institutions): "NO". I have argued
that the psychological establishment is *grossly* negligent in providing
encouragements for basic foundation research that would point up special
needs where professional work is most needed (and thus lead to the
development of better care through reasonable specialization) *AND*
would lead to the development of a SYSTEM of mental health care
providers AS IS REASONABLE, with a variety of roles. I also see the
organization failing to support the development of a common and
reasonable science-practitioner role that IS arguably THE ONLY THING that
will advance the science generally. I have clearly outlined the nature of
problems and deficiencies and they are not being addressed. There is a
general self-satisfaction OR a willingness to improve only on their terms;
clinical psychologists and their major parent organization show no
willingness to make many improvements OR even any willingness to do
investigations that may lead to more reasonable specializations OF ROLES
in the field and improvements in services.
The pompous presumptuousness and pretentiousness of clinical
psychologists, I say, is beginning to kill our children. We've seen the
record of the APA and its members on mental health care for a long time
now. How long are we going to give them?? At least we can see the
unabated rise in psychological problems as a sign that CLINICAL
psychologists are not doing a good job, can't we? OR SHOULD THEY BE
ALLOWED TO SIMPLY TAKE RESPONSIBILITY (AND CREDIT) AS CONVENIENT??
NOBODY'S GOING TO LET THEM DO THAT. Snake oil "works" sometimes, too.
Although the APA is not a government institution, my point largely
stands!! Especially since there is no government agency in a
"policing-of-needs" role for clinical psychologists. Some say "the APA is
just a professional organization." Not so; it is an accrediting body for
programs in higher education!! Think about this situation:
How would you like it if the standards of good research and training
for a profession were accredited (and monitored) by a private guild union
out to promote its membership AS IS? How would you like this to be true
when the standards of self-representation and practice of this
professional group have been (for a century now) based in good part ON
MYTH? How would you like it if the training programs I'm referring to
take place in UNIVERSITIES, in programs ostensibly offering the best (and
"highest level") of training? Well, this is precisely the case with
universities and the American Psychological Association. This private
lobby organization ACCREDITS programs of public higher education at the
highest levels. It is also very clearly possible to show that basic
foundation research has not been done to establish the counseling/therapy
field -- to show where it stands, how services offered compare to the
help that might be provided by other reasonable helpers AND related
research that would show where special techniques and treatments really
need to be developed. It is very possible to argue that there is gross
negligence in the reasonable provision of treatment because of
self-protective presumptions on the part of the group and their lobby
organization, the APA. The APA, I believe, is now culpable for gross
negligence in failing to make reasonable provisions for care and could be
sued by families whose members have suffered from needless low levels
of care and from unreasonably unspecialized care.
What the guild lobby and union (the APA) has done is to secure a
*legal* status for its members (i.e. they have lobbied for a position of
power in society and have to some large degree achieved that). Only
people with certain higher-ed. qualifications (and a license) can
represent themselves as "psychologists," offering services clinically. Now
this would be fine, but this licensure, ETC. is no guarantee of quality in
standards or practices OR quality in education!! And the APA is doing
an absolutely awful job of securing good research and (relatedly) good
education and standards in-practice.
It is my view that the point has been reached where clinical
psychologists do more to hold up AND PREVENT needed services and
science progress than they do good. The APA basically facilitates this.
Others could step in and likely do most of the job clinical psychologists
now do.
And I do hold the APA and clinical psychologists responsible for
deaths already. There has been more of a concern about power and politics
(e.g. licensing and laws giving clinical psychologists exclusive "turf")
than there has been concern OR INVESTMENT for developing an appropriate
tested science. Now I know they have their myths, convenient concerns,
and rationalizations, BUT THAT DOESN'T MEAN A THING. These
"professionals" have all the trappings and glory a group could possibly
have and be so useless. I have indicated basic foundation research
needed and why. The points are OBVIOUSLY unassailable, except on
grounds of presumption, myth, and pretense. All clinical people
(INCLUDING THE former APA PRESIDENT, SELIGMAN HIMSELF) try to do is drag
down (they hope once and for all) the few studies that have been done on
some important questions. AND THEY ARGUE SO ALL MIGHT BE SATISFIED
WITH POOR SCIENCE! They like to discount gaps in research by citing the
presence of *other, largely unrelated* pieces of research. Well, in real
science, one thing does not make up for ANOTHER!! They support basically
whatever studies SEEM TO BACK THEM (pick and choose). They are
NEEDLESSLY SATISFIED WITH poorer studies than could be done and
with studies that are less than what would be needed to back their claims.
The research on the question of professional efficacy vs. other
helpers (not yet well answered at all and which has been neglected for 17
years, since the last good study) is pivotal to providing a sound basis
for the creation of a system of personnel as mental health resources
*AND* to finding the truly difficult problems where professional work
should be concentrated. AND: Nothing has been done AS APPROPRIATE,
given the nature of the subject matter, to set up a system that would
yield continuously improving inter-rater reliabilities on many fronts. The
local science-practitioner role I have outlined would do this, and having
counselors/therapists become such practitioners should be part and parcel
of graduate training. If each training institution had some
*continuously* ON-GOING specialized research, this would also help
greatly (though this alone may not suffice).
"Therapists" are often nothing but a bunch of self-serving, hacks.
They practice "science" (act. technology, or an "apparent form" of such),
as convenient (usually as an isolated incident to get their degree), and
otherwise make claims of "art" as convenient. Well, it is all both at the
same time. The emphasizing one or the other as convenient for propaganda
has to stop. Adaptational problems are becoming more rampant in our
society. Children are dying by their own hands. Maybe clinical
psychologists don't think that is worth talking about, but others may.
P.S. It will not be long before even those without clear knowledge of
where the "science" is, of the short-comings of the field, and of the
active avoidance of science foundation research, come to view clinical
psychology as deficient. When Skinner's ideas for learning (after
Sputnik) were judged not up to the task, look what happened to him. Given
that the APA and clinical psychologists show insufficient initiative in the field,
possibly the psychiatrists will end up with the real job. It really
doesn't take people who "know much", because no one does. It may (given
how crude things are) just take caring psychiatrists to turn things around.
Maybe no one will turn things around.
Nature of Deficiencies Seen in Research and Practice, Leading to the Complaint
Review of argument, in brief:
The modus operandi is wrong for good continuing (integrated) research on
several fronts and real science. The RECORD shows us more here: Just
one example: The DSM committee had very, very, very few studies on
inter-rater reliabilities of diagnostic criteria to look at before they
came up with DSM-IV options. Look at the 40 most common diagnoses
(DSM-IV Sourcebook, Vol. 2): the committee had far fewer than half that
many studies on the reliability of criteria (and this even though there
are multiple criteria (often around 8) for each disorder). This is very
telling. ARE WE GOING TO LET THIS HAPPEN AGAIN BETWEEN THE DSM-IV
AND DSM-V???? I have argued for a true science practitioner role, where
clinical psychologists (typically) work together regularly to develop (on a
continuing basis) better and better inter-rater reliabilities about
diagnoses (or behavioral problems), treatments, and assessing outcomes.
The fact that only 2 sorts of control groups (placebo and waitlist)
are used is very telling. This leaves all questions about the efficacy of
"professionals" vs. other reasonable sources of help unanswered and IN
ONE MAJOR WAY does not allow us to locate the most serious problems
objectively and concentrate professional resources there. Where new
treatments need to be developed, they are not. Also it does not allow us
to get indications about the utility of briefly trained peer counselors
or "paraprofessionals," so as to provide more helpers. That way we could
have more help and more accessible help where possible and prudent. The
negligence is overwhelming. ARE THEY GOING TO CONTINUE TO FAIL TO
RATIONALLY INVESTIGATE PROVIDING A SYSTEM OF MENTAL HEALTH
PERSONNEL?? ARE AREAS WHERE A CONCENTRATION OF EFFORTS IS
NEEDED TO DEVISE NEW OR BETTER TREATMENTS GOING TO REMAIN
UNFOUND??? Pretentious clinical psychologists are killing our children
right now. They have been exposed. The sort of answer needed has been
outlined. Someone will bring this field down soon if changes are not made.
Complaints of gross negligence should be filed against the APA
frequently. Only by hitting "deep-pockets" can we force and assure change.
Some Concluding Remarks:
HMOs are squeezing out mental health care AND it is the field's own
fault! Why? Two big reasons: (1) Basic foundation research to show that
professionals are better than other reasonable sorts of helpers (e.g. well-
selected and trained peer counselors or "paras") has not been done. The
last well-controlled study on this was done in 1979, and it showed
the other reasonable helpers just as good for a broad range of problems.
Supposedly progress has been made, but the field is too cowardly to test
itself or establish the basic foundation for its own practice. I have
argued how these studies, using more briefly trained (and well-selected)
"other helpers," are just as ethical as many of the "efficacy" or
effectiveness studies done nowadays. AND, I have ALSO argued how these
studies ARE NECESSARY to help illuminate those problems for which
special methods need *yet* to be developed. The counseling/clinical
psychology field has let us down. We are all suffering because of this. And
because of the low credibility of the field and the lack of clear results
and established procedures in many areas, HMOs are providing GROSSLY
inadequate mental health care. AGAIN, it is the field's fault, basically
for the reason just outlined above AND because of (2) (below):
(2) The field has yet to establish any common sort of REAL
science-practitioner role that would result in MUCH MUCH more work
being done to show inter-rater reliabilities to diagnoses (or problem
identifications) and with subsequent treatments and results. Many in the
field of counseling/clinical psychology LARGELY FALSELY claim a
"science-practitioner" role. This is typically at least largely a fraud.
Typically clinicians do not work within individual agencies developing and
showing any inter-rater reliabilities. Because of this, we KNOW they are
not science-practitioners. I have gone on to argue that they do not even
operate IN ACCORD with science, for there are too few studies and FAR too
few continuing studies for there to be clear procedures to model for most
problems. Thus, for most problems, "therapists" do not even operate IN
ACCORD with science (like your local M.D. most often does). Being an
intelligent "science reader" and extrapolating idiosyncratically from
studies does not make one a science practitioner, in even the loosest
sense. To be a science *reader* is not even a particularly professional
activity; many intelligent lay persons can do this (and EXTRAPOLATE AS
WELL). BECAUSE THERE ARE OFTEN NO STANDARD, ESTABLISHED
PROCEDURES FOR PROBLEMS, HMOS ARE GOING BY THE MORE EXTRAVAGANT
CLAIMS OF SUCCESS, DENYING MANY OF US THE NEEDED CARE. At the same
time, some problems are not being treated because they are not seen as
treatment worthy or treatable. THE **FIELD ITSELF** IS TO BE BLAMED.
For additional well-justified and well-supported criticisms of the
counseling/"therapy" field, see House of Cards by Robyn Dawes (1994).
This book is now available in paperback from The Free Press (N.Y., N.Y.
1996). The author is a former clinical researcher and the book is
thoroughly grounded in the research literature. One thing Dawes describes
is the clear, large body of evidence that "clinical judgment" is
virtually never helpful in predicting the future behavior of clients.
Only formal (standardized) sorts of assessments have been shown to be of
any predictive value. Over 140 studies on this matter *virtually all
show* that "clinical judgment" either doesn't help *OR hurts* the ability
to predict several client behaviors (one behavior that is especially
noteworthy here is violence).
Please see the Addendum to this "Ans. to FAQs"
Addendum: "Ans. to FAQs": Info. and Precautions for Psychotherapy
A D D E N D U M
Addendum: other serious concerns about how the APA is FAILING
to well-serve science (and clients)
I have some serious concerns with regard to "quality of research
matters" and the APA. Number one: a major figure in the APA, IN FACT
its former president, Martin Seligman, represents biased survey findings
from a Consumer Reports study (that seem to supply favorable results for
the counseling/therapy field as it is today) as being valuable and offering
a lot of good information when they do not. He indicates that this sort of
study is THE best study for some purposes, when IT IS NOT. This major
APA figure expresses satisfaction with such a study and represents it as
a good study, considering what he thinks we can do. It is hard to
understand how he could be so mistaken. This Consumer Reports survey
was on satisfaction with, and persons' appraisals of, the results of
therapy of differing lengths (self-selected). This study was done with
Consumer Report readers -- a select group. Not only is this study clearly
NOT unbiased, but contrary to Seligman's assertions, a much better
(controlled study with a good, reasonably broad sample) study could
ethically be done. I discuss all this at length (below).
My second concern about the quality of research presented to the
public (and to professionals) is even more grave. There is clear evidence
that a study that misrepresents the variables it has examined AND
misconcludes from those has been accepted for publication AND published
in a major APA journal. Only a study this poor whose results seem
favorable towards the field would ever be accepted. The study is so poor,
and its conclusions are so misrepresented and so unjustified, that it
is hard for me to view it as anything but fraudulent. I discuss one such
instance of a study with the Stein and Lambert "meta-analysis", in the
Spring, 1995 Jour. of Consulting and Clinical Psyc. This study
mischaracterizes its sample and greatly misrepresents its results. Most
of my Addendum below reviews this study to make completely clear what
I am talking about.
I. First, the matter of NEEDLESSLY being satisfied with low-grade
(likely biased) results: In the December, 1995 American Psychologist,
Seligman cites the Consumer Reports survey as a basically good sort of
study to get at things that cannot be assessed by conventional efficacy
research on therapy. The research we are addressing IS the effect of (or
efficacy of) professional "therapy". To me, the main and key practical
question here is: does therapy (what people are professionally trained in
over many years in graduate school -- whatever this is, really) HAVE a
significant and intended helpful effect AS COMPARED to what might
otherwise occur with other reasonable ("lay") helpers. Many existing
studies compare therapy only to no treatment: no decent modern controlled
studies compare professional treatment to other reasonable sources of help.
While Seligman may be correct that traditional (or at least usual)
efficacy research does overlook important real-world factors in efficacy
research, the CR method is far from being the best solution (though he
claims it is). I shall indicate a better way of getting more naturalistic
research results (and at the same time doing better science) and speculate
on why such a solution is avoided. It is also a way of getting other
vitally needed information.
Seligman states that the CR survey study, though potentially (if not
likely) biased in several ways, uncontrolled, AND though using measures of
questionable meaning, is nonetheless the best way to get at certain
"naturalistic" conditions and assessing "real-world" efficacy of therapy.
He says in essence: given the nature of clients and therapy, how long
it goes on, how it is selected, the multimodal techniques commonly
used (changing to suit the client), and how all this actually occurs in
real therapy, the survey is THE way to get at just those. He simply
suggests better and more frequent measures, and some blind measurements;
otherwise he thinks the CR survey is a very fine way of doing things.
This is simply and completely incorrect.
By doing CONTROLLED research like that done by Strupp and Hadley in 1979
(AND NOT SINCE), and using control groups in such research as I have
outlined in my "Ans. to FAQs" (that is, reasonably selected and
trained peer counselors or "paras" as the controls against professionals),
all the most important information could be obtained, but with a
naturalistic control group as well. With some subgroups of clients
with problems we could compare trained "therapists" (discretely and
particularly AND CLEARLY trained) AGAINST reasonably selected "peer
counselors", NOT GRAD.-SCHOOL-TRAINED and offering to some extent
whatever treatment or support they might naturally provide (with some
necessary guidelines and minimal training). This would provide some of
the information we want from efficacy studies that are to show the real
effect of PROFESSIONAL therapy per se (and why we bother to send people
through long graduate training programs). Other studies with more
delicate sub-groups of clients (the vast majority, the rest that are not
reasonably amenable to the first sort of study) could be controlled
studies comparing clearly trained (and grad. school trained) professionals
against well-selected and reasonably trained, but still relatively briefly
trained, paraprofessionals -- there is nothing to argue against this for
MANY, MANY problems (THOUGH WITH THE POTENTIAL THAT "THERAPY"
WILL NOT COME OUT BETTER -- as was the case in 1979 with that fine
controlled study done then). In any case the research I and others propose
would allow for flexible duration of treatment; use of whatever
techniques may be useful for a particular client (and changing these when
necessary); multiple problems in the clients; and assessment of
improvement in many senses. That is four of the five things Seligman is
after. Also, all the research improvements Seligman proposes could be
done. But doing it our way, the studies ARE CONTROLLED (unlike Seligman's
proposal) *and* selection of clients need not be biased (the same types of
clients are assigned to the two different treatment groups). Studies like
those I outline are required if we are to see whether "therapy" and
PROFESSIONAL skills are the factors in improvement, and not other, more
common helpful factors: common empathy in a well-adapted listener, one who
helps you find your way by applying simple guidance techniques and/or
simple advice.
Why don't "therapists" think of better science such as I have outlined?
Well, the first thing cited is "ethical problems." In my debate with
Seligman he alluded to this vaguely but was not specific and
bowed out of the debate when I illustrated that the type of study I propose
is no more unethical (and likely LESS so) than the typical "efficacy"
studies done today (with "waitlist" controls or placebo controls being
contrasted against professional treatment). (This argument has been
outlined at greater length in the "Ans. to FAQs".)
Another reason psychologists may not readily think of research like I have
described may be because the 'levels' of the independent variable
compared are complexes. There is not a simple single factor that varies
between the groups. BUT this is not required of science. We just need to
know what the differences in treatments ARE (OR CLEARLY HOW THEY
WERE ARRIVED AT) and they may involve multiple differences (ALL THE
DIFFERENCES MAY NOT BE KNOWN AND DIFFERENCES MAY DEVELOP, WE NEED
ONLY TO KNOW HOW THE 2 GROUPS WERE SET-UP DIFFERENTLY FOR
RELIABILITY OR REPLICABILITY -- I.E. OBJECTIVITY). You can then in fact
have very naturalistic comparison groups -- both experimental and
control. Yet another reason the sort of research I have strongly argued
for is not done (and I think this is a LIKELY one) is fear of negative
results. I am drawn to this conclusion because of the constant,
unwarranted blindness to this important kind of research, which has
already had BOTH IMPORTANT and surprising results.
ALSO: The importance of this research for discovering where
professionals are most needed and where treatments need to be developed
(as I stated in the "Ans. to FAQs") CANNOT be overestimated. We are
denying clients by not getting the information we really must have to
serve them well.
II. Now the presentation of a study grossly misrepresenting itself and
what the results mean. A major case in point is: The Stein and Lambert
study in the Jour. of Consulting and Clinical Psyc. This study is a
disgrace. This is the study many professionals have recently made
reference to in dismissing the idea of paraprofessional and peer
counselors (a major conclusion that would *have to be* well-justified to
not deprive clients of easier access to care and more comfortable and
affordable care). I thoroughly reviewed this supposed "meta-analysis"
and the studies it examined, upon which the authors' conclusions are based.
The Stein and Lambert "meta-analysis", in the Spring, 1995 Jour. of
Consulting and Clinical Psyc., is a professional disgrace. *Most studies
cited* were *not on the issue they were supposedly addressing* in 2
senses. The issue they were trying to get at was the effectiveness of
trained professional counselors (or "therapists") VS. "paraprofessionals."
The first big problem is that MOST of the studies they examined (by far)
involved comparing EXPERIENCED professionals to INTERNS or counselors
in-practicum (also essentially professional, but not ones with a lot of
continuing experience and work in the field). These are not studies of
professionals vs. "paras." (One should also note that among more
experienced professional counselors/therapists, bad ones have surely had
time to self-select out of the field, giving us one of several grave
confounds I shall discuss briefly.) The second way the study was really
NOT of what the authors were supposedly addressing: the vast majority of
the studies also had this issue of trained vs. less trained (or whatever) as
an issue on the side (often something just examined in passing); i.e. most
of the studies were not really studies of the question being examined NO
MATTER HOW POORLY YOU DEFINE "PARAS." Finally, there were only 3
studies that were both GOOD by normal standards and used measures that
could be considered objective. Two were in favor of the "paras" and
one showed a tie. In the 2 best studies, actually looking at what
rational people would call "paras" and having this as the actual focus
(purpose) of the study, I believe the "paras" fared better. And then ALSO,
as I mentioned, I shall indicate below that there were great likelihoods
of confounds.
FURTHERMORE AND FINALLY: The authors' conclusions from their
"meta-analysis," which were actually contrary to the facts when looked at
the way I did above, are inexplicable. I can conclude only that these
researchers are totally incompetent AND/OR this was an effort at
subterfuge. For a more detailed look at the study, beginning with this
matter, read on:
Stein and Lambert in this Jour. of Consulting and Clinical Psyc.,
Spring, 1995 journal article (published by the APA) concluded from their
"meta-analysis" that grad. trained therapists yield modestly better
results in outcome measures from clients than paraprofessionals.
Confounds are a big issue, very much so here, as I shall describe further
below. Furthermore, not only do serious confounds likely abound, but no
rating system for study quality was involved in their meta-analysis.
Often the better studies (including previous meta-analyses) indicated
results contrary to what they reported overall.
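For contrast, a standard meta-analytic step is to pool study effects with explicit weights (typically inverse-variance weights, optionally adjusted by a coded quality rating), so that weaker studies count for less. A minimal sketch of inverse-variance pooling; the effect sizes and variances below are invented for illustration, not drawn from the Stein and Lambert data:

```python
def pooled_effect(effects, variances):
    """Fixed-effect, inverse-variance pooled effect size."""
    weights = [1.0 / v for v in variances]           # precise studies weigh more
    total = sum(w * e for w, e in zip(weights, effects))
    return total / sum(weights)

# Invented example: three study effect sizes with their variances.
effects = [0.5, 0.1, -0.2]
variances = [0.04, 0.01, 0.09]
print(round(pooled_effect(effects, variances), 3))  # -> 0.149
```

The point of the weighting is exactly what is missing above: an unweighted, quality-blind average lets poor studies pull the summary figure as hard as good ones.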
In this Stein and Lambert review of past studies and
crude"meta-analysis", even in the SELECT group of studies that had
objective measures and supposedly did show some effect for more
TRAINED vs. LESS TRAINED, it is good to give people a realistic and
meaningful idea of the magnitude of those differences found. On objective
outcome measures (objective exit "tests") where differences were found,
the "effect size" was .2 (once the 1 outlier study of the 10 was thrown out
*as the authors themselves suggest*). THIS MEANS a difference of one
fifth of a standard deviation, on average, between the groups (please see
the "FOOTNOTE" AT THE END)**. TO MAKE THIS MEANINGFUL: This is about
the level of difference shown between males and females (where males
and females differ at all) on *several* objectively measured
interpersonal traits, and on a number of scales male/female differences
are GREATER. IN THIS SPHERE this level of difference is NOT considered
impressive (certainly it is not considered differentiating); it is at most
about a quarter of the difference between males and females shown on
conglomerate scales set up to differentiate them. Another thing to note
w/r to the studies included in Stein and Lambert's review that I referred
to above: with an s.d. of .31, 1 or 2 of these 9 studies likely
showed the "paras" doing better (i.e. due to variability in results amongst
the 9 studies -- and recall this is the select group of studies that showed
more than the typical outcome difference). AND I must add that in these
studies showing a difference with grad. training: These differences could
very well be due to confounding factors (BIG ONES): perceived status of
therapist, age of therapist, experience (a matter different from training)
and OTHERS! None of this was controlled. The only controlled study w/o
confounds showed untrained listeners equal to therapists for a BROAD
RANGE of college student problems.
Also remember these are group data, and with just a .2 standard-deviation
difference between the grad.-trained and the "paras" (their definition), a
sizable number of the "paras" (in EACH of the nine studies) were doing
better than the professionals' average (actually only slightly less often
than the other way around).
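The magnitude claims above can be checked with simple normal-curve arithmetic (assuming roughly normally distributed outcome scores; the .2 effect size and .31 between-study s.d. are the figures reported above):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

d = 0.2  # reported mean effect size (grad.-trained minus "paras")

# Share of "paras" scoring above the professionals' mean:
paras_above_prof_mean = 1 - phi(d)
print(f"{paras_above_prof_mean:.2f}")  # -> 0.42

# With a mean effect of .2 and a between-study s.d. of .31, the
# expected fraction of studies showing the effect reversed:
p_reversed = phi(-0.2 / 0.31)
print(f"{p_reversed:.2f} -> about {9 * p_reversed:.1f} of the 9 studies")
```

So with a d of .2, roughly 42% of the lower-scoring group still exceeds the higher group's mean, and about a quarter of studies would be expected to come out the other way, which is consistent with the "1 or 2 of these 9 studies" point made above.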
Because some personal uncertainty still remained for me with respect to
these studies, I went back and looked at all the particular studies where
objective measures were involved. In 6 out of nine, the comparison was
actually between late-stage grad. students (in practicum or interns) vs.
EXPERIENCED degreed professionals. Obviously this is not the comparison
I was interested in when I went to this article, nor is it the issue
S & L were supposedly out to address with this study (again, we are
really not looking at training, but experience). Of the 3 studies that
remained, using objective, typical psychological measures of symptom
change, 2 found no difference between "paras" and professionals and 1
favored the "paras."
Furthermore, INDEED it is still true today that the best research
available on professional psychologists vs. others, and the only
controlled study, is one that compared professionals to untrained
individuals. This is the Strupp and Hadley study, 1979. And here it was
shown that intelligent good listeners could help college students with a
broad range of problems at least as well as professional psychologists.
It is completely unacceptable that a study comparing professionals to
"paras" without major confounds has not been done. It is worse than if a
drug company did not do placebo studies. Worse, because we do not know
that "paras" would provide anything materially or substantially different
from grad. trained psychologists.
In spite of the grave deficiencies and weaknesses of the studies presented,
and the unjustified conclusions of the authors, this S & L "meta-analysis"
has still been heralded, and it is the study on the basis of which
professionals have argued that there is a modest difference in outcomes
with clients when grad. trained therapists and "paraprofessionals" are
compared. Readers can see for themselves after going to the source (Stein
and Lambert in the Journal of Consulting and Clinical Psychology, Spring,
1995) that what I have been able to say about this study is true and that
it is a mess. This is a
much worse than usual meta-analysis (many are very, very good and
useful). Most meta-analyses are summaries of studies that were on the
actual matter of concern. Again, here in the typical study included in this
meta-analysis, the primary focus of the study was not the question at hand
(not even the effectiveness of more trained vs. less trained); rather, this
was either a secondary hypothesis of the study OR results that were
"presented almost as an aside." Again, in fact only 1 study included in the
whole report had a similar primary focus (still not identical to the
question at hand) and was controlled for confounds (this is the old Strupp
and Hadley study, '79; NO MORE RECENT ONES HAVE BEEN DONE). This study
showed no difference in counseling outcomes between trained
psychologists and totally untrained "nice guy" professors doing counseling
with college students with a BROAD RANGE of problems. To quote the
*authors of the study itself* (Stein and Lambert) on these matters:
"Readers familiar with the outcome research in this area are aware
that authors typically did not design their studies to primarily
investigate the effects of therapist training or experience. Indeed,
examining the relationship between training or experience and outcome
was usually a secondary hypothesis, or results were presented almost as
an aside. Thus, it appears that the investigation of the relationship
between level of training and outcome was not planned as carefully as
procedures designed to study the central hypothesis. For example, as
noted earlier in the article, the typical study did not adequately isolate
the issue of professional training from confounding variables. We are
aware of only one published study that has quite reasonably isolated the
ingredient of therapist training by controlling some of its inherent
confounds and correlates (e.g. age, status, perceived expertness,
interpersonal skills, etc.). This was the Strupp and Hadley's (1979)
classic study comparing male university professors, who were selected
because of their reputation among students as being approachable and easy
to talk to, and experienced male psychotherapists." (end quote)
It appears that while the issue of fully grad. trained therapists vs.
"paras" may not be a new one, THE RESEARCH WOULD BE. IT HAS NOT YET
BEEN DONE in "modern times"!! Also, it is VERY likely that a number of
the major confounds I cited as possibly present probably were present
(and some I didn't think of). Some (I won't say "a lot") of good and
reasonable work
is yet to be done before we have a clue as to whether well-selected,
reasonably-trained and supervised "paras" do as well as clinical and
counseling psychologists with the terminal degree with the vast majority
of clients. The best evidence we have indicates that in general they will
probably do as well. There is some suggestive evidence from other
studies cited by Stein and Lambert that fully trained therapists may be
helpful with diagnoses and that dropout rates with "paras" become higher
only when more than 10 or 15 counseling sessions are required. This is
suggestive for the role of a new type of supervising clinical psychologist.
Still the full merit of reasonably selected and trained "paras" has not been
addressed. One wonders whether a major research issue will ever be
addressed when it is not in the vested interests of therapists.
How these authors concluded in this "meta-analysis" that graduate-trained
therapists yield modestly better results on client outcome measures than
paraprofessionals is very hard, or impossible, to understand on a
reasonable basis. As I indicated, bad definitions, poor measures, and
confounds are a big issue (and very much so here); likely confounds
abound, and no rating system for study quality was involved in their
meta-analysis. Often the better studies (including previous
meta-analyses) indicated results contrary to what they reported as
overall conclusions. What their study was really about was
misrepresented, and yet the conclusions are still more misleading than
that.
The "effect size" in the Stein and Lambert research for the studies using
objective measures was defined as follows: ((mean of the more highly
trained ON THE OBJECTIVE MEASURE) - (mean of the less trained ON THE
OBJECTIVE MEASURE)) divided by the STANDARD DEVIATION OF THE LESS TRAINED
ON THE MEASURE. I tried to gauge the magnitude of this effect in
meaningful terms by assuming the s.d. shown by the "experimental" ("para")
group would be about that of the general population. (It is actually
likely LESS, thus I'M INFLATING the actual "effect" the way I represent
things, though this is not certain.) Anyhow, assuming the s.d. of the para
group equals that of the general population on the measures of symptoms
involved, I argued that a .2 standard deviation difference (more trained
vs. less) would not typically be considered meaningful on the measures.
I do admit that it may be too much to assume that the standard deviation
shown by subjects in the control (para) group is equal in magnitude to the
standard deviation of the general population on the outcome measures.
Yet, again, if anything, the range shown by a select group on a pertinent
measure is typically (though not invariably) smaller than that in the
general population AND would come out that way here, I think, unless the
shift in therapy was dramatic for some while for others it was nil or for
the worse. Typically the clients would start AND (equally treated) END
more similar to one another than those in the general population. With
this in mind, as I've said, the real denominator in the formula for
"effect size" as they defined it (the s.d. of the client sample) would be
even less than the standard deviation of the general population on the
test. And this would mean the difference in outcome measures is likely
less than .2 s.d. of the measure (generally speaking), that is, when
looking at what this would mean in the actual general population, on which
the s.d. of the measure is based. This would make the difference between
paras and profs. even less meaningful. The only argument against this
would be a finding of greater variability in the client sample than in
the general population.
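The denominator argument above can be put in a few lines of Python. The
numbers here are hypothetical, chosen only to show how a smaller
client-sample s.d. inflates the reported "effect" relative to
general-population units:

```python
# Stein & Lambert's "effect size," as described above:
# (mean of more trained - mean of less trained) / s.d. of the less trained.
def effect_size(mean_trained, mean_para, sd_para):
    return (mean_trained - mean_para) / sd_para

# Hypothetical symptom-measure numbers, for illustration only:
mean_trained, mean_para = 52.0, 50.0
sd_para_sample = 10.0         # spread within the (select) client sample
sd_general_population = 12.5  # assumed larger spread in the population

d_reported = effect_size(mean_trained, mean_para, sd_para_sample)
d_population_units = (mean_trained - mean_para) / sd_general_population

print(d_reported)          # 0.2 in client-sample units
print(d_population_units)  # 0.16 -- smaller in general-population units
```

If, as argued, the client sample is less variable than the general
population, the same raw difference amounts to less than .2 s.d. when
restated in population units.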
Link to the SUPPORT SHOWN BY MAJOR FIGURES IN THE FIELD!!
Below is a different kind of TOPIC. The perspective below is to orient one to basic cognitive-developmental human ethology and provide a research outlook for studies in that area.
Below is an outlook I believe allows for continuous growth of knowledge in some basic areas of psychology. The heart or essence of it is "defining each behavior of interest in terms of the behaviors of the same organism surrounding it." This gives one a self-correcting mechanism in one's approach to understanding -- the most important contribution of classical ethology. Add to it the basic knowledge we have of emotions and emotional development and you can have an outline of a meaningful perspective on learning and a meaningful concept of "learning." -- brad jesness
An Ethological Conceptualization of Learning:
Learning in terms of the interrelated development of basic capacities.
Every significant behavior change is now thought to involve learning. Learning and innate aspects of behavioral change are now conceived of as partners in the developmental and adaptational process (Gould and Marler, 1987). They are not even thought to be clearly separable at this point in our understanding of human behavior (Anastasi). Their partnership usually occurs in such an intimate and close time frame that they cannot be contrasted. With regard to the most significant behavior changes, such as stage shifts in cognitive abilities, one cannot see the great extent to which each is involved, and it is impossible to say which is most important: Is whatever "pre-wiring" we have most important or is it what's acquired -- that which involves interaction with the environment and at the same time between our basic "capacities" -- that's most prominent? These are serious questions. And so are the more detailed questions: What is the initial expression of the most important innate action patterns? When do innate action patterns appear? If they are not all present at birth (AND I BELIEVE THEY ARE NOT), how do they manifest themselves as they emerge during ontogeny? AND: What are the basic capacities (if any) that have relatively constant characteristics or similar interrelationships across development? Which types of capabilities most reflect that which is accrued via experience and with development and what is the nature of the changes undergone?
Learning, like other topics in psychology, concerns behaviors that have innate and species-specific characteristics. Learning is frequently said to be "constrained by innate factors," but as far as developmental questions are concerned, it is IN FACT DEFINED in large part by such factors (Johnston, 1981). And, as such, it is involved in all the most significant behavioral changes. Learning as a topic involves the most "microscopic" look at behaviors, in the wider discussion of processes of significant behavior change. Learning may be the most important topic by far, for environmentally-induced behavioral change certainly seems to be key to quality adaptation in all areas of responding.
Learning may be defined as changes in those adaptational processes susceptible to experience and due to changes in these processes occurring singly and/or in an interactive manner. There is no pure acquisition (reality does not just progressively impinge itself) and there are no arbitrary acquisitions. Acquisitions must be retained. Clearly there are innate and species-typical processes involved, and fortunately for the human behavioral sciences, general laws to be found.
It should not be surprising to find that it is impossible to discuss learning in any detail or with any generality without asking what basic processes are involved in the bit-by-bit behavioral acquisitions which characterize learning. How many types of processes are there and what are their basic natures? I will try to outline what I see as the basic types of processes, their basic character, and which aspects of the processes remain relatively constant and which change systematically, reflecting what in fact has been accrued.
First, the organism always has perceptual biases and response biases. These are interrelated and both change significantly during development.* These related processes precede [other] cognition and cognitive processes, including the major aspect of cognitive processing -- representation (to be discussed soon). The proper understanding of these processes (perception and response biases) can come only with proper definition. And, objective definition is obtained only when the environmental and behavioral context in which the important features of these processes occur has been specified. Behaviors (OF THE SAME ORGANISM) preceding and those following a behavior of concern must be identified. This will become more and more important with ontogeny and will be true of the other processes to be discussed as well.
In addition to having perceptual biases and response biases, in general, we have memory. Memory at first seems to be of the immediate and may thus be said to have just a short-term aspect. But with experience, the organism interacting in consistent manners with the environment will begin to respond to structure and systematic change in the environment. This shows recognition memory, and soon recall, both characteristics of long-term memory. This capacity, like short-term memory, is limited, BUT INDEPENDENTLY (Brainerd and Kingma, 1985). After some point, "processing space" for short-term memory little influences the processing characteristics of long-term memory, though it is also limited at any given stage of development (the matter of stages to be discussed soon).
This is not all that happens. New response characteristics emerge. As structures and occurrences are recognized, new aspects of stimuli are related or are related more consistently (i.e. reacted to in a "different way"). This is not arbitrary. This may be best viewed as determined by new "perceptual biases" and related response biases. The most significant perceptual shifts, I believe, are the first occurrence in, and that which sets into motion, a new developmental stage. Yet this kind of perceptual shift occurs only every so often with regard to any given set of related stimuli to which we respond (Fischer and Pipp, 1984). There are possibly as few as five stages of development in major response areas (Freud, 1965; Ginsburg and Opper, 1978; Jesness, 1985).** How are acquired behavioral adaptations guided in the mean time?
At this point we could type different sets of behavior and note the characteristics of their changes, BUT this would violate the standards we have set for objective definition of a behavior-of-concern. We will be better off considering the basic processes we already have and look for further features of these that determine behavior change. One factor has to do with the fact that development of long-term memory takes time. And, the way it develops may show phases. Most important: There are aspects of what we recall that are worth keeping conscious. Consciousness requires response time and uses the scarce resources of short-term memory and much affects other responding. I would say this phenomenon of consciousness occurs for either of two reasons: (1) Further stimuli which are novel or of different varieties must be noted
(and possibly, eventually recalled) and these are related to things already remembered (recognized or recalled) OR (2) things to be remembered in much the same WAY as past experiences (already remembered) will be encountered (i.e. similar environmental structures will be encountered (Griffin, 1981)). (Some of (1) and (2) is probably related to the fact that some stimuli impinge on us via less salient sensory modalities or through less salient combinations of modalities. These aspects of stimulation could become conscious later yet may still be related to some basically similar type of relationship we know (and can remember) when it has been found through other modalities.) The aspect of long-term memory of which we are at times able to be conscious is a good broad definition of representation. The nature of representation will change much during development and some of that of which one is conscious as a child will become aspects of awareness or totally automatic in the older child or adult. We still must include these aspects in our understanding of representation. We now need to ask what phases there may be in the development of representation, this important aspect of long-term memory and the most important capacity in significant behavioral change involving experience.
First: In a given type of circumstance (or "set of circumstances") it may take time to usefully retain and represent all the necessary static and dynamic aspects of the situation. To say this in more reductionistic terms: It will take time for all the stimuli of different salience to occur a sufficient number of times, given our perceptual/response biases, and time for them to be responded to consistently. An entire phase of development within every stage could be related to such developments AND, as indicated before, such may well vary in timing somewhat based on the salience of sets of stimuli involved in different circumstances. Second: Next, one's attending (and responding) selectively to certain aspects of immediate situations (ultimately related to perceptual/response biases) eventually may allow one to relate new things separated in space and time. This is another characteristic of memory and retention and eventually of representation. The latter may show two aspects: (1) an ability to imagine sequences of occurrences (the more important ones often involving your own behaviors or potential behaviors) and (2) an ability to see similarities across circumstances (Lucariello and Nelson, 1985). These two reciprocal aspects of memory development and representation can result in there being a second phase during each major stage of cognitive development. This too, for adaptive reasons (and for adaptive purposes), takes time. I do not have the space to speculate on the details here. In any case, all changes in representation will be manifested by systematic alterations in perceptual/response characteristics.
Now, finally, I believe one must discuss stages. The processes of memory and perception and the response biases and differences in stimulus salience, all already discussed, cannot (I believe) account for the progressive, hierarchical nature of development (Bowlby, 1982). Development has some invariant stages (descriptively speaking) in which some problems involving representation cannot be understood or cannot be understood reliably. Furthermore, it is just such reliability or consistency which is necessary for the further development of long-term memory processes, including representation. How does one get such consistency, adaptively, AND what is the parsimonious outlook? My answer is that we have stages, defined by new perceptual/response biases, emerging during ontogeny. Such perceptual shifts within an adaptive behavioral complex can have powerful effects indeed, and especially so when it is proposed that the changes in learning also involve progressive memory developments (with phases). The perceptual biases, as indicated before, may differ from one set of related stimuli to another and thus the timing of stages may vary to a degree for different types of responses. It would also seem appropriate to look at this in terms of the timing of aspects of stages. Although what the "sets of related objects" are has not been well delineated and how the timing of developments may vary between them is not clear, there are indications of some common synchronies and some general (overall) stages seem to be defined by these (Corrigan, 1983). In any case, the perceptual biases trigger a series of effects, given some of the more consistent characteristics of memory, and these result in a new level of representation and consciousness of new problems. All this allows for another series of developmental changes, such as already described. 
It should be clear from the outline of ontogeny given above that a general principle applies to learning: Behavioral development involves selective adaptation and eventually consistency of response. A variety of experiences will, in the normal course of adaptation, all be encountered even as consistencies are found.
I believe one can point to two aspects of behavior (broadly speaking), spoken of above, that change most in their characteristics during development: (1) the set of perceptual/response biases operative and (2) the elaborateness and precision found in representation. The changes in these capacities are systematically related. A MAJOR CONSISTENCY throughout development seems to exist with respect to short-term memory. While this type of memory may vary with development by 20-30% in quantitative capacity in terms of the number of "chunks" that can be dealt with "deliberately" (increasing with development), this change does not seem tremendously significant (Case et al., 1982; Dempster, 1981). It is clearly not much that's most salient that we can process at one time even late in development. This is especially startling given the large quantitative differences over development in the detail we respond to and in the length of sequences of responses we exhibit. "Quantitative capacity" may be roughly synonymous with what's often viewed as "working memory," if this is defined as that which we are conscious of in a given situation and at a given moment. But this has little to do with information processing overall. There is always awareness beyond consciousness (in the narrow sense) in significant situations and much processing of long-term memory (some of this related to representation) occurs outside normal awareness.***
Other characteristics of memory change in a manner adaptively congruent with changes in perceptual/response biases and with the changing nature of representation during each stage or phase. These changes should have less specific effects on significant learning and should be of a less radical nature. These changes will be definable in terms of the effects they have on responding.
* I would say at the outset that I use an unconventional definition of "perceptual biases", but this would be misleading because I believe that modern conceptualizations of the field of perception are arbitrarily (unsystematically) constrained.
**With reference to Piaget's theory, I should note that I consider his 2 phases of the Preoperational Period to be stages in the same significant sense as the S-M Period, the C-O Period and the F-O Period are stages.
***Of course psychologists may develop awareness and consciousness of things not normally subject to such through unique and sustained observations. Obviously, much of this will be awareness, etc. of things as they are for the child during development and how this fits into the "bigger picture".
Anastasi, A. (1958). Heredity, environment, and the question
"How?" Psychological Review, 65, 197-208.
Bowlby, John (1982). Attachment, 2nd ed. New York: Basic Books.
Brainerd, C.J. and Kingma, J. (1985). On the Independence of
Short-Term Memory and Working Memory in Cognitive
Development. Cognitive Psychology, 17, 210-247.
Case, R., Kurland, D.M., and Goldberg, J. (1982). Operational
efficiency and the growth of short-term memory span.
Journal of Experimental Child Psychology, 33, 386-404.
Corrigan, R. (1983). The Development of Representational
Skills. New Directions for Child Development, 21, 51-64.
Dempster, F.N. (1981). Sources of Memory Span Differences.
Psychological Bulletin, 89, 63-100.
Fischer, Kurt W. and Pipp, Sandra (1984). Processes of
Cognitive Development: Optimal Level and Skill
Acquisition. In: R. Sternberg (Ed.), Mechanisms of
Cognitive Development. New York: W.H. Freeman & Co.
Freud, Sigmund (1965). Three Essays on the Theory of
Sexuality. New York: Avon Books.
Ginsburg, H. and Opper, S. (1978). Piaget's Theory of
Intellectual Development, 2nd ed. Englewood Cliffs, N.J.:
Prentice-Hall.
Gould, James L. and Marler, P. (1987). Learning by Instinct.
Scientific American, January.
Griffin, Donald R. (1981). The Question of Animal
Awareness. New York: Rockefeller Press.
Jesness, B. (1985). A Human Ethogram ... , Key Chapters and
Sections. Indexed in Resources in Education, Nov.
Jesness, B. (1986). Info.-Processing Theories and Per-
spectives on development ... . Indexed in Resources
in Education, May.
**THE LAST TWO SOURCES SHOULD BE READ TOGETHER.**
For a few important editorial corrections, go to THIS LINK . These 2 papers
are now available from ERIC as pdf documents -- for free (the last link also
gives links to copies of pages that are illegible in ERIC pdfs). AND:
See THIS LINK to get to the pdfs, from the ERIC Document Collection.
Johnston, Timothy D. (1981). Contrasting Approaches to a
Theory of Learning. The Behavioral and Brain Sciences, 4.
Lucariello, J. and Nelson, K. (1985). Slot-Filler Categories
as Memory Organizers for Young Children. Developmental
Psychology, 21(2), 272-282.
Instantly make a web page on your own site which runs the
Jesness Peer Counselor Effectiveness Predictor: just copy the
following lines of HTML code, paste them into NotePad or
SimpleText, and then save the page as peercounsEP.html (with
the Save As Type set to "All Files"). Then simply upload the
web page to your site (be sure to leave in the line crediting
the source of the program).
<html>
<head>
<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<title>Jesness CPI-based Peer Counselor Effectiveness Predictor</title>
</head>
<body>
<applet name="TestApplet" code="Applet2.class" width="720" height="600">
</applet>
<center>From <a href="http://cyberper.cnc.net">http://cyberper.cnc.net</a><br>
For the Main Bibliography, see
The data from the successful pilot study is available upon request, from the tool's author.</center>
</body>
</html>
OR, for your own desktop copy:
CLICK HERE to
download your own copy of the Peer Counselor
Effectiveness Predictor (based on CPI scale scores and all relevant
research). If you find the program useful, please thank me by
providing a link to http://cyberper.cnc.net . Thanks.
Try the new Real / Ideal Self Checklist, HERE
-- cheers, brad jesness, M.A.,
former college psychology and counseling instructor