The Analytical Laboratory, 1938-1976

by William Sims Bainbridge
Analog, January 1980, Vol. C, No. 1
(Fiftieth Anniversary Issue) pages 121-134.

 Contents:
Introduction
Analyzing the Laboratory
Honor Roll of Authors
The Long and the Short of Science Fiction
New Analysis of the Authors
Conclusion
Note and References

Introduction

From March 1938 through October 1976, stories in every issue of Astounding Science Fiction, renamed Analog in 1960, were rated in a readers' poll called the Analytical Laboratory. An incredible amount of fascinating literary data lies buried in the 464 "Labs" that were published, covering twenty-five hundred fiction items. Half of these were short stories, and a third were novelettes. The remainder consisted of the most influential pieces of fiction, 70 "short novels" published whole in single issues, and 133 serialized novels published in a total of 370 installments. Since each installment was rated separately by readers, we will count them separately here. Included in these large numbers are many of the most popular works of science fiction ever written. This article will show how we can reanalyze the Labs to answer a variety of questions: Which authors were the most popular? Does the length of a story affect its popularity? Was the Laboratory biased against authors of some kinds of science fiction? Can we chart the ups and downs in an author's career?

For the first two and a half years, the Lab merely listed the stories in order, from the most popular in first place down to those near the bottom. The Lab for October 1940 introduced a more precise system. Votes for each story were tallied. Each first-place vote gave the story one point; second place gave two points; third place three points, and so on. The total number of points for each story was added, then divided by the number of people voting on that story. For the first fifteen years, the Lab was just used to express reader opinions and guide the editor in deciding which authors to emphasize, but from 1953 onward, the authors that came out on top were given a cash bonus.
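For readers who want to tinker with the scoring rules themselves, here is a minimal sketch in Python (my own illustration, not part of the original Lab machinery) that computes a point score from a hypothetical tally of votes.

    def point_score(votes):
        """Compute a Lab-style point score from a tally of votes.

        `votes` maps each place (1 = first, 2 = second, ...) to the number
        of readers who put the story in that place.  The score is the total
        points divided by the number of readers who voted on the story.
        """
        total_points = sum(place * count for place, count in votes.items())
        total_voters = sum(votes.values())
        return total_points / total_voters

    # Hypothetical tally: five first-place votes and three second-place votes.
    print(f"{point_score({1: 5, 2: 3}):.2f}")  # prints 1.38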

The impulse to analyze science fiction scientifically has gripped many readers over the years. For example, John A. Leiter, an Oregon attorney, quantitatively expressed his personal opinions about authors and their fiction, in a letter published in the August 1933 issue of Wonder Stories. Leiter rated stories on a scale of 1 to 10, and came to the grand conclusion that Wonder Stories averaged 27 percent superior to its rival, Amazing Stories.

When John W. Campbell, Jr. became editor of Astounding in 1937, this magazine had already taken a lead in the field, but Campbell wanted to improve both its quality and popularity. One of his first decisions was to restore Brass Tacks, a general letter department, in the November 1937 issue. Six months later he wrote, "A magazine is not an autocracy, as readers tend to believe, ruled arbitrarily by an editor's opinions. It is a democracy by the readers' votes, the editor serving as election board official. The authors are the candidates, their style and stories their platform." (April 1938:125) The first Analytical Laboratory was published in the following issue, rating the top six stories that had appeared in March. Campbell explained that the Lab was a supplement to Brass Tacks: "Since we can't print all the letters -- or even a large fraction of them -- we are going to print the results." (May 1938:160)

Other editors eventually copied Campbell's Lab. From its very beginning in 1946, the British magazine New Worlds has carried a readers' poll called The Literary Line-Up. In the 1950s Robert A. W. Lowndes published a poll called The Reckoning in his magazines Dynamic Science Fiction, Future Science Fiction, The Original Science Fiction Stories, and Science Fiction Quarterly.

Two readers, Walter A. Carrithers and Dennis Donahue, have attempted to expand the scope of the Analytical Laboratory. In the November 1943 issue of Astounding, Carrithers reported his analysis of 1360 Brass Tacks letters published over the previous ten years. He counted one point for each favorable mention of a story, 2.5 points for an "all time choice" rating in one of the letters, and minus one point for every disparaging opinion. Jack Williamson's novel, The Legion of Space, won first place for the decade, and E. E. "Doc" Smith's The Skylark of Valeron came in second.

Donahue's report, published in the December 1972 letter column, anticipates the analysis carried out in this article. He calculated average point scores for an accidental sample of stories by ten authors. First place went to Lloyd Biggle, Jr. on the basis of only three stories, and Donahue's analysis was not systematic enough to give reliable results. It is not good enough merely to record Lab scores and rank them or average them to get an overall rating of the authors. Before I can report my own findings, I must explain the nature of the Analytical Laboratory and show how it can be analyzed correctly.

Analyzing the Laboratory

Let us start with a specific example. I have chosen the Lab for a very special issue, November 1949. Filled with stories by the greatest authors, this famous issue is the hero of its own science fiction story: The November 1948 issue contained a letter from Richard A. Hoen rating the stories in the November 1949 issue. There are two possible explanations for this remarkable Brass Tack. Either Hoen's letter was delivered to 1948 by time machine, or Campbell puckishly contrived to bring Hoen's fantasy to life. In either case, the November 1949 issue was duly rated by other readers, resulting in the Analytical Laboratory given in Figure 1.

Place  Story                           Author               Points
  1    Gulf (Part 1)                   Robert A. Heinlein     1.38
  2    And Now You Don't (Part II)     Isaac Asimov           2.33
  3    What Dead Men Tell              Theodore Sturgeon      3.00
  4    Final Command                   A. E. van Vogt         4.09
  5    Over the Top                    Lester del Rey         4.90

Figure 1: The Analytical Laboratory for November 1949.
This poll rates one of the most famous issues of Analog's
predecessor, Astounding Science Fiction.

Five stories are listed in order, from the most popular to the least. In first place is installment one of Gulf by Robert A. Heinlein, with a point score of 1.38. Let's review how Campbell calculated this. If every reader had put Heinlein in first place, his point score would have been 1.00. Perhaps only eight people voted, five giving Heinlein first place, and three giving him second. Then Campbell would have figured the average score as follows:

(5 × 1 point) + (3 × 2 points) = 11 points; 11 points ÷ 8 voters = 1.375, which rounds to 1.38.

Or, perhaps the vote was five hundred for first place and three hundred for second. The result would be the same. It is possible that some readers put Heinlein in third place. We do not know what the actual numbers were, but we can assume they were large.

Both in the place listings and in the point scores, as in the game of golf, a low number is a good rating, while a high number is bad. This seems simple enough. But there are at least four reasons why we cannot blithely add and divide the place and point scores in an overall analysis of the authors and their twenty-five hundred stories.

The first problem is that the Analytical Laboratory frequently fails to report votes on the least popular stories. In addition to the five items listed in Figure 1, the November 1949 issue also contained "Finished," a short story by L. Sprague de Camp. We can easily add it to the list, putting it in sixth place, but there is no way to know how many points it received.

The second problem was mentioned by Campbell: "Not every reader letter casts votes on all the stories; thus the total number of votes cast for a particular story may not equal the total number of ballot letters." (October 1943:29) Probably, people will tend to skip stories they dislike. This means that the point scores for the least popular stories will be lower (better) than they deserve to be.

The third problem is that Campbell used an odd convention for expressing tie votes. For example, A. E. van Vogt won first place in the December 1948 issue, while Poul Anderson and Eric Frank Russell tied for second. In the Lab, Campbell gave second place to both Anderson and Russell, and awarded third place to a story by H. B. Fyfe. More properly, Fyfe should be in fourth place, since three stories got better ratings than his. Since Anderson and Russell were battling for second and third place, we should put each of them in "2.5" place. If many readers expressed tie scores the way Campbell did, then again some lower-rated stories would wind up with incorrectly good scores.

The fourth and most important problem comes from the fact that different issues contained different numbers of stories. Campbell recognized that this fact made it very difficult to compare from one issue to another. One time he commented, "The June issue carried seven stories besides the article; this means that point-score votes ranged from one to seven -- and made point scores tend to run high. That's somewhat unfair, in a way -- a third-place story or fourth-place story in such an issue has met and surpassed more competition, yet gets a tougher point score than the rearguard item in a five-story issue. Some day all things will be perfect -- and a completely fair system of reporting may be worked out." (September 1943:48) This article will use specially-designed correction formulas to defeat these four problems and make it possible to translate all scores to a single, uniform scale.

The place orderings, which exist for all 464 issues, can be converted to a uniform scale with a simple and mathematically sound formula. This was derived from probability logic by Toshio Yamagishi, a graduate student in my sociology department. In outline, the thinking is as follows. Suppose all twenty-five hundred stories were ranked from best to worst, in a single huge Lab. Now let Chance play the role of editor, selecting stories at random to fill the 464 issues. Finally, assume that stories within each issue were rated by a regular Lab, so we know which one is the most popular, which is second in the issue, and so on. Mr. Yamagishi pointed out that we can derive a statistical formula that lets us predict the probability that a story in a given place in an issue of given size will come from any given level in the ranking of 2500. From this rather complex mathematical expression, he derived Formula I, a very simple equation that gives the expected rank of a story. While the above logic is absurd if applied to any one actual issue, it does describe adequately the average of any randomly chosen group of issues.

Formula I:    X = N × P / (m + 1)

X is the desired result, the story's standardized rank in a scale that can be used to compare from one issue to another. The letter P stands for the place the story achieved in the Lab for its issue, while m is the total number of stories in that issue, whether listed in the Lab or not. N stands for the number of steps in the standardized ranking scale, assumed to be a large number. In this article, I have let N equal 1000. Formula I divides the entire range of the ranking scale into equal parts, their number depending on how many stories appeared in the issue. November 1949 contained six stories, so Formula I divides the thousand ranking steps into sevenths. Heinlein's first-place story gets an estimated rank of 143, because 1000 × 1/7 = 143. Asimov's story, in second place, receives 286, and the others follow in order: 429, 571, 714, and 857.

What would have happened if de Camp's story had not been published, if the issue had contained only five stories? Then the thousand ranking steps would have been divided into sixths, and Heinlein's story would have received an estimated rank of 167. Like the Analytical Laboratory place and point scores, this new scale of 1000 assigns a low number to a popular story, and a high number to an unpopular one. Thus, Heinlein's story gets a better rating in an issue of six stories than in an issue of five stories. This makes perfect sense -- presumably the competition is tougher the more other stories there are in an issue. Formula I gives the following estimated ranks out of 1000 to the first-place stories in issues of from three to ten stories: 250, 200, 167, 143, 125, 111, 100, 91.
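As a convenience for readers who want to run their own Lab research, here is Formula I rendered as a small Python function; the function name and the rounding to whole ranks are my own choices.

    def formula_one(place, stories_in_issue, scale=1000):
        """Formula I: estimated rank X = N * P / (m + 1).

        place            -- P, the story's place in its issue's Lab
        stories_in_issue -- m, the total number of stories in the issue
        scale            -- N, the number of steps in the standardized scale
        """
        return round(scale * place / (stories_in_issue + 1))

    # The six-story November 1949 issue:
    print([formula_one(place, 6) for place in range(1, 7)])
    # [143, 286, 429, 571, 714, 857]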

Probability logic could be applied to the point scores as well as to the place listings, deriving expected point distributions for each place in each size of issue. This would involve many tedious estimation procedures based precariously on small samples of data. I have chosen to use a cruder but still serviceable method of approximation. I start with a basic equivalence: the average point score for a given place, taken over all issues of a given size, can stand in for the place itself, and so can be assigned the estimated rank that Formula I gives that place. For example, it turns out that the average point score for first-place items in the 82 four-item issues is 1.64. Formula I tells us that first place in a four-item issue earns an estimated rank of 200. Therefore, we can let a point score of 1.64 equal an estimated rank of 200. The average for second place is 2.24, so we let this equal 400, and so on.

This is fine for those rare stories that have exactly average scores, but what about all the others? Here I make a slightly wobbly but cogent assumption: Scores in-between can be estimated using a simple mathematical function derived from the distribution of average scores. I was prepared to try various logarithmic curves, but I was pleasantly surprised to discover that straight lines fit the data quite well. The approximation was carried out separately for each different number of stories in an issue, and involved deriving equations for what are called regression lines (or trend lines). The overall error, the amount to which the straight lines missed the average scores, was only about 1.5 percent. Formula II gives the equation for converting any point score to an estimated rank in a scale with 1000 steps.
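The calibration can be sketched as an ordinary least-squares fit of average point scores against Formula I ranks. The sketch below uses only the two per-place averages quoted above for four-item issues, so its constants will not exactly match Figure 2, which was fitted on all the places in those issues; it is meant only to show the shape of the procedure.

    import numpy as np

    # Formula I ranks for first and second place in a four-item issue,
    # and the average point scores quoted above for those places.
    ranks = np.array([200.0, 400.0])
    avg_scores = np.array([1.64, 2.24])

    # Fit the regression line score = a + b * rank; Formula II then
    # inverts this line to turn any point score back into a rank.
    b, a = np.polyfit(ranks, avg_scores, 1)
    print(a, b)   # intercept a and slope b for this two-point illustration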

Formula II:    X = (S - a) / b

X is the desired estimated rank, while S is the story's point score, and a and b are constants derived from my regression analysis for each size of issue. Figure 2 gives the list of constants, so anyone may use this formula in their own Lab research. Of course, there are so few issues with 3, 8, or 9 stories that the estimates for these cases will be especially crude. Because real issues vary greatly in quality, Formula II will sometimes give a result less than zero or greater than a thousand. But when stories with such extreme estimated ranks are averaged in with others, these wild variations tend to wash out. Formula II is compatible with Formula I, so when a Lab fails to give a point score to a story, we can use the value from Formula I instead.

Number of     Number      Constant    Constant
Items in      of Such        a           b
the Issue     Issues
    3             1          1.1       0.002
    4            82          1.15      0.00265
    5           202          1.33      0.00324
    6           114          1.62      0.00356
    7            29          1.48      0.00484
    8             4          2.2       0.0033
    9             1          1.7       0.005

Figure 2: Constants for Use in Formula II.
This table lets the reader do his own Lab
research using both our conversion formulas.
To convert the point score of any story
to our 1000-step scale, simply plug the
score and the appropriate constants from
this table into Formula II.

November 1949 was indeed an unusual issue. Despite the heavy competition, Heinlein's 1.38 score was much better than average, and it translated through Formula II to an estimated rank of minus 67. Asimov gets an even 200, somewhat better than the 286 estimated by Formula I. The other scores go: Sturgeon = 388, van Vogt = 694, and del Rey = 921. If my approximation procedures are any good, Formula II gives a more precise estimate than Formula I, because it makes use of the much greater information carried by the point scores, compared with the rough place listings.
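Here is a corresponding Python sketch of Formula II, with the constants of Figure 2 typed in as a small table; again the function name is my own. Applied to the November 1949 point scores, it reproduces the estimated ranks given above to within a point of rounding.

    # Constants a and b from Figure 2, keyed by the number of items in the issue.
    FIGURE_2_CONSTANTS = {
        3: (1.1,  0.002),
        4: (1.15, 0.00265),
        5: (1.33, 0.00324),
        6: (1.62, 0.00356),
        7: (1.48, 0.00484),
        8: (2.2,  0.0033),
        9: (1.7,  0.005),
    }

    def formula_two(score, stories_in_issue):
        """Formula II: estimated rank X = (S - a) / b."""
        a, b = FIGURE_2_CONSTANTS[stories_in_issue]
        return round((score - a) / b)

    # The November 1949 Lab, a six-story issue:
    for author, score in [("Heinlein", 1.38), ("Asimov", 2.33),
                          ("Sturgeon", 3.00), ("van Vogt", 4.09),
                          ("del Rey", 4.90)]:
        print(author, formula_two(score, 6))
    # Heinlein -67, Asimov 199, Sturgeon 388, van Vogt 694, del Rey 921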

Honor Roll of Authors

Using both Formula I and Formula II, I calculated the average estimated rank of all fifty-three authors who had published ten or more stories in the 464 issues covered by the Lab. Figure 3 lists these writers, along with the number of stories, their average year of appearance, the average estimated rank based on the place listings, and the average estimated rank based on the point scores. As in a regular Lab, the authors are listed in terms of their point scores, from the most popular to the least. Two things should be mentioned about these estimates. First, the two estimates tend to agree with each other, although some differences result from the greater sensitivity of Formula II. Second, the average for all stories over the thirty-eight years is 500, according to either formula, so all the authors below Silverberg are below average.
Figure 3: Honor Roll of Fifty-Three Authors.
This table gives the average estimated popularity
of every author who published ten or more
stories ranked by the Analytical Laboratory.
The best-loved authors are at the top.
                                Number     Average        Average
                                  of        Date       Estimated Rank
                                Fiction      of       "Place"   "Points"
Rank  Author                     Items   Publication  Listings   Scores
  1   Anson MacDonald              10       1941         210        98
  2   Robert A. Heinlein           25       1947         228       145
  3   E. E. "Doc" Smith            13       1944         244       190
  4   Jerry Pournelle              11       1973         280       265
  5   A. E. van Vogt               59       1944         348       298
  6   Harry Harrison               32       1966         321       316
  7   Lawrence O'Donnell           11       1947         330       323
  8   Frank Herbert                28       1963         381       329
  9   Poul Anderson                67       1960         348       332
 10   Hal Clement                  29       1953         315       340
 11   Jack Williamson              19       1944         348       343
 12   Clifford D. Simak            39       1949         356       350
 13   Isaac Asimov                 45       1950         391       351
 14   H. Beam Piper                20       1957         318       351
 15   Stanley Schmidt              12       1972         363       362
 16   David Gordon                 11       1959         372       377
 17   Raymond F. Jones             31       1949         390       378
 18   James Blish                  12       1956         380       386
 19   Gordon R. Dickson            43       1965         414       387
 20   James H. Schmitz             39       1964         380       390
 21   John T. Phillifent           11       1968         410       391
 22   Eric Frank Russell           45       1951         403       397
 23   Randall Garrett              32       1961         372       405
 24   Walter M. Miller, Jr.        10       1952         408       410
 25   Mack Reynolds                48       1964         410       428
 26   Murray Leinster              40       1953         419       432
 27   Lester del Rey               24       1944         442       433
 28   Lewis Padgett                35       1945         437       434
 29   Fritz Leiber                 14       1946         452       439
 30   L. Ron Hubbard               23       1944         443       441
 31   E. B. Cole                   15       1957         428       456
 32   Theodore Sturgeon            23       1945         423       457
 33   L. Sprague de Camp           27       1946         456       460
 34   Katherine MacLean            10       1959         427       472
 35   Robert Silverberg            15       1961         472       480
 36   Malcolm Jameson              28       1942         520       510
 37   George O. Smith              34       1949         529       520
 38   Christopher Anvil            73       1964         556       542
 39   Nathan Schachner             16       1939         548       564
 40   Ross Rocklynne               16       1942         555       579
 41   Theodore L. Thomas           10       1962         619       595
 42   Robert Chilson               12       1970         583       620
 43   Algis Budrys                 22       1957         640       636
 44   Jack Wodhams                 24       1969         610       644
 45   A. Bertram Chandler          19       1952         608       647
 46   Walt & Leigh Richmond        11       1965         634       667
 47   Lee Correy                   10       1956         710       669
 48   H. B. Fyfe                   20       1953         657       681
 49   P. Schuyler Miller           12       1942         717       683
 50   Harry Walton                 11       1942         742       723
 51   W. Macfarlane                14       1967         725       732
 52   Lawrence A. Perkins          11       1969         725       747
 53   Frank Belknap Long           11       1945         829       805

A glance at this big table shows that Robert A. Heinlein, "Dean of Science Fiction Writers," is in second place. What author could possibly be more popular than Heinlein? The answer is: Heinlein himself! "Anson MacDonald" was one of Heinlein's pen names. There are other pen names on the list. "David Gordon" is a pen name of Randall Garrett. "Lawrence O'Donnell" and "Lewis Padgett" are both pseudonyms for the collaboration of Henry Kuttner and C. L. Moore. "Clement," "Anvil," and "Correy" are also pen names, but their owners are not represented by other names on the list. There are several surprises in Figure 3, but I will leave these discoveries to the reader. There is much to contemplate and debate in the table, but I will turn to the question of how mere length influences popularity of works of fiction.

The Long and the Short of Science Fiction

Over the years, Campbell mentioned several factors that might influence the popularity of a story, and once suggested that the second episode of a serial might have suffered because readers forgot characters and plot details over the month since the first episode. (June 1955:118) This suggests the possibility that later installments are less popular in general than the first installments. Figure 4 graphs data that support this idea. It shows the average popularity of serial installments, including the one-installment "short novels." Frank Herbert's novel, The Prophet of Dune, was the only work that ran for five installments. I dropped the middle episode, and included this long novel in with the 19 four-installment novels. Sixty-three serials had three installments, fifty had two, and there were seventy short novels. The graph shows that later installments tended to be less popular than first episodes. Also, serials of three or four episodes, really full-length novels, were of equal popularity, while two episode novels and short novels were significantly less popular. This suggests that popularity depends on the length of the fiction, as well as on the skill of the author.

Figure 4: Popularity of Serial Installments.
The first installment of a serial tends to be more
popular than the second, third, or fourth installment.
Perhaps, readers forget plot and character details
over the month's gap between episodes, and others
may find it confusing to begin reading in the
middle of a story. The chart also shows that full-size
novels of three or four installments are more
popular than shorter novels, even in the first
installment.

Campbell commented on the length factor several times. "One of the problems inherent in science fiction is that each story actually is a brief glimpse of an alien world-scene. The longer the story, the more chance the author has to give a feel of reality -- a texture of living fabric -- to his world-picture. Result: a longer story, all things -- and authors! -- being equal, will have more satisfying effect for the reader." (June 1956:72) Of course, it may simply be that readers best recall those stories that took longest to read, subconsciously multiplying the enjoyment experienced per page times the number of pages to arrive at a total impression. Perhaps this is partly true, but Figure 4 shows something more subtle. First installments of two-episode stories and short novels rank much lower than first installments of three-episode and four-episode novels. When the readers rate these opening installments, they have not yet read the concluding parts of each work. Apparently, long fiction has a special quality that emerges even in the first few chapters. When an author writes a long novel he probably invests more effort in planning and characterization, so that even the first part of a long novel conveys more vivid images than an equally long segment of a shorter work.

Another time, Campbell explained: "Generally, the longer a story is, the more chance the author has to work out his background ideas, characters, and plotting. Serials generally take first place, primarily because the author can do a better job. Unlike here-and-now stories, science fiction must describe even the common things of life -- life in the story environment. More space gives more chance for that. The result is that there are very few long-remembered, 'classic' short stories, a few novelettes, but many much-mentioned serials." (July 1946:122) To test this idea on all kinds of fiction, I tabulated place distributions for all fiction published in the 187 five-story issues that contained no Lab ties. Figure 5 gives the results.

Figure 5: Percentage of Four Kinds of Fiction Achieving
Each "Place" In 187 Five-Story Issues.

Long fiction has a tremendous advantage over short fiction.
This table summarizes Lab ratings of 935 stories and serial
installments, showing that the shortest works almost never
achieved high popularity.
                Serial       Short
Place        Installments    Novels    Novelettes    Short Stories
  1               70%          51%         20%             2%
  2               18%          40%         42%             5%
  3               10%           9%         24%            22%
  4                2%           0%         11%            33%
  5                0%           0%          3%            38%
Total            100%         100%        100%           100%

Number
of Items          145           35         294            461

The pattern is quite regular. Serials beat out short novels which surpass novelettes which win over short stories. Indeed, the short stories are crammed into the last three places. Figure 5 shows that the length factor is really very powerful. Since length of fiction makes such a difference, we should reconsider Figure 3 and its estimates of popularity for the authors. Some authors may write huge, dull novels that get good ratings simply because they are big and, therefore, memorable. Other authors may create marvelous jewels of short stories, which have less impact on the swift-eyed readers. Figure 3 is entirely valid, so long as we understand that it measures the over-all impact of each author rather than the quality of writing page-for-page. We need an alternative estimate of popularity that removes the powerful influence of length of fiction.
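The tabulation behind Figure 5 is a simple cross-count, and it can be sketched in Python as below. The record layout is my own invention; one would feed in a (fiction type, place) pair for every story or installment in the tie-free five-story issues.

    from collections import Counter, defaultdict

    def place_distribution(records, places=range(1, 6)):
        """Percentage of each fiction type landing in each place.

        `records` is a list of (fiction_type, place) pairs, one per story
        or serial installment, taken from five-story issues with no ties.
        """
        counts = defaultdict(Counter)
        for fiction_type, place in records:
            counts[fiction_type][place] += 1
        table = {}
        for fiction_type, counter in counts.items():
            total = sum(counter.values())
            table[fiction_type] = {p: 100.0 * counter[p] / total for p in places}
        return table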

New Analysis of the Authors

To arrive at a new, length-corrected measure of popularity, I sorted all twenty-five hundred pieces of fiction into the four basic length categories: serials, short novels, novelettes, and short stories. I then arranged the items in each group in order of the estimated rank based on point scores. This gave me the equivalent of four huge Labs. I applied Formula I, calculating new estimated ranks on the 1000-step scale. Since these rankings were calculated for each type of fiction separately, the effect of length of fiction was largely eliminated. The number of items in each of the four sets ranged from 70 to over 1200, so Formula I gave much more precise estimates than when used with regular Labs. Figure 6 is a map of these new popularity ratings.
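Before turning to the map, the re-sorting step just described can be sketched in a few lines of Python. The data layout is my own invention, but the logic follows the procedure above: group the stories by length category, sort each group by its Formula II estimate, and reapply Formula I within the group.

    from collections import defaultdict

    def length_corrected_ranks(items, scale=1000):
        """Re-rank stories within each length category.

        `items` is a list of (story_id, category, estimated_rank) tuples,
        where estimated_rank comes from Formula II (or Formula I when no
        point score was published).  Returns a length-corrected rank on
        the 1000-step scale for each story.
        """
        by_category = defaultdict(list)
        for story_id, category, est_rank in items:
            by_category[category].append((est_rank, story_id))

        corrected = {}
        for group in by_category.values():
            group.sort()                      # best (lowest) estimates first
            m = len(group)
            for place, (_, story_id) in enumerate(group, start=1):
                corrected[story_id] = round(scale * place / (m + 1))
        return corrected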

Figure 6: Popularity Map of Fifty-Three Authors, Correcting for Length of Fiction. Each circle represents one author. Open circles are authors who held their positions or even lost ground when we switched to a length-corrected measure of popularity. Solid circles represent authors that gained five or more places in the ranking.

The vertical dimension of Figure 6 puts the best-liked authors at the top, and the least-liked at the bottom. Authors on the right side wrote a high proportion of short fiction, while authors on the left specialized in long works. Open circles represent writers who lost ground from the ranking in Figure 3, or barely held their ground. Solid circles are authors that rose five or more places in the ranking. As expected, the authors that rose significantly in the new ranking tended to write a good deal of short fiction.

An unfortunate effect of thirty-eight years of Analytical Laboratories may have been to downgrade short stories in favor of vast epics, thereby slighting the genius of some very fine authors. Close examination of Figure 6 will help redress the balance. The most spectacular rise was 27 places, achieved by Malcolm Jameson who went from 36th to 9th. Lester del Rey zoomed up 21 places, while other big gainers were Gordon, Russell, Walter Miller, Leiber, Leinster, Padgett, MacLean, Sturgeon, Silverberg, Anvil, and Correy. Each of them gained ten or more places in the ranking. Four authors dropped more than twenty places: "Doc" Smith, Harrison, Clement, and Piper. Despite the fact that he seldom wrote short stories, MacDonald-Heinlein did not budge from first-and-second place.

Our final use of Lab statistics will be to chart the changing popularity of three authors throughout their Astounding-Analog careers. I have chosen A. E. van Vogt, Poul Anderson, and Isaac Asimov because they are the best known of the most prolific writers. I arranged each man's stories from earliest to latest so we could see the trends over time. If I just graphed the raw data, we would have a bewildering tangle of zigzags, so I did two things to smooth the curves out. First, I used the length-corrected popularity estimates on which Figure 6 was based. If we did not correct for length of fiction, the line on the graph would hop up and down wildly as each author switched back and forth from long novels to short stories. Second, I further damped out short-term variations by calculating seven-point moving averages. A "moving" average does just what the name implies -- it moves. Each point does not really represent the popularity of a single story, but of that story averaged in with the three that came before it and the three that came after it. Thus, the height of the line at X = 4 is the average score for stories 1 through 7. The height at X = 5 is the average for stories 2 through 8, and the height at X = 6 is the average for stories 3 through 9. Because we need seven stories for each average, we can't calculate values for the first three and last three stories by each author. Figure 7 shows the careers of van Vogt, Anderson, and Asimov.
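The smoothing itself is easy to reproduce. The sketch below (my own, with made-up numbers) computes a centered seven-point moving average, leaving the first three and last three stories blank just as described above.

    def moving_average(values, window=7):
        """Centered moving average; None where a full window is unavailable."""
        half = window // 2
        smoothed = []
        for i in range(len(values)):
            if i < half or i + half >= len(values):
                smoothed.append(None)         # the first and last few stories
            else:
                window_values = values[i - half:i + half + 1]
                smoothed.append(sum(window_values) / window)
        return smoothed

    # Made-up length-corrected ranks for ten stories by one author:
    print(moving_average([300, 250, 400, 350, 500, 450, 200, 300, 350, 400]))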

Figure 7: The Careers of van Vogt, Anderson, and Asimov. These charts are like stock market graphs, showing the ups and downs in the Astounding-Analog careers of three of the best-known and most prolific science fiction writers.

We can read these lines just like the ones on stockbrokers' graphs indicating the ups and downs in the stock market. Van Vogt's graph shows a tragic pattern. He begins very high, and rises slowly to a marvelous crest that begins to turn downward at the end of 1943. A gradual decline steepens into a precipitous fall, halted only briefly, that drops into a chasm in 1946. A recovery over the next two years restores only a third of the original loss, and van Vogt fades until his last story in 1950.

Anderson's pattern is quite different. It depicts a stalwart writer ready to battle back from adversity. He starts in the late 1940s just at the 500 average, and quickly rises to the 300 level. He holds a plateau, until suffering a terrible slump around 1958. He struggles back up to his former popularity, then slips back to begin a steady rise that continues until the end of the period covered by the Lab.

We see yet a third pattern in Asimov's graph. He starts at a very high level around the year 1940, drops quickly, then recovers to the 200 level. A steady decline sets in, taking him down below 600 in 1954. His final recovery is not as simple as it appears on the graph. After publishing in Astounding-Analog quite regularly, Asimov was completely absent from its pages from 1956 to 1968, and only his last two stories, in 1972 and 1976, received really good ratings. The overall trend of Asimov's line is downward. Just as van Vogt vanished from Astounding after years of decline, so did Asimov, devoting himself instead to a splendid career of popular science fact writing. We cannot say for sure that Asimov was driven out of science fiction by a declining popularity, and only he can tell us if he experienced his career in this way. In fact, it takes a close reading of the Labs to discern the negative trend. The estimates reflected in Figure 3, which have not been adjusted for length of fiction, do not show it, but display a very shallow rise. The reason is that Asimov shifted from short to long fiction over his career. Sixty-four percent of his first 22 Astounding pieces were short stories, but only 18 percent of the last 22. While Asimov's short stories were rated higher than most other authors' shortest works, his long fiction was rated near the average for novels and novelettes. One of the most remarkable facts about Asimov's career is that he has established himself as possibly the most famous contemporary science fiction writer, despite the fact that most of his fiction was written decades ago and did not receive consistently favorable ratings.

Conclusion

This article has shown how data from thirty-eight years of Analytical Laboratories can be standardized and used to answer many questions about the popularity of authors and types of fiction. Despite our many findings, we have not exhausted this vast store of information. Far from it! Many projects remain to be done, several of them combining the Lab data with other facts and judgments. For example, one could read the more than twelve hundred short stories in the collection, coding each of them according to its style and content. Then we could chart the changing popularities of the different categories. Were stories about psi and ESP really popular in the fifties, or were they common only because John Campbell encouraged them? Do robots rise and fall over the years, or are they perennial favorites? Are there trends in the popularity of pessimistic stories, or triumphant stories, or politically conservative stories, or erotically liberal stories, or indescribable stories? The opportunities are not endless, but they can keep us busy for a good, long time.

Note and References

Statistically-minded readers will recognize that the Analytical Laboratories, and several portions of this article, treat ordinal data as if they were interval data. This is most obvious when we calculate averages from rank-order data. Since all our information comes from averages calculated in the Labs themselves, I have felt we must assume our data could be treated as if they were interval. Certainly, the presentation of our results is made much easier. But Formula I is not based on this assumption; nor is Figure 5. For a reference on some aspects of this problem see: "The Level of Measurement and Permissible Statistical Analysis in Social Research," by Gideon Vigderhous, Pacific Sociological Review, Vol. 20, No. 1, January 1977, pages 61-72.

Bainbridge, William Sims, The Spaceflight Revolution, Wiley-Interscience, New York, 1976.

Bainbridge, William Sims and Murray Dalziel, "New Maps of Science Fiction," Analog Yearbook, Baronet, New York, 1978, pages 277-299.

Bainbridge, William Sims and Murray Dalziel, "The Shape of Science Fiction," Science-Fiction Studies, Vol. 5, July 1978, pages 164-171.

McGhan, Barry, Sciencefiction and Fantasy Pseudonyms, Misfit Press, Dearborn, Michigan, 1976.

Rogers, Alva, A Requiem for Astounding, Advent, Chicago, 1964.

Tuck, Donald H., The Encyclopedia of Science Fiction and Fantasy, Volumes 1 and 2, Advent, Chicago, 1974, 1978.