
Thread: "...the field has real standards of scholarly validity."

  1. #21

    Join Date
    Dec 2001
    Location
    NJ
    Posts
    8,484

    Re: "...the field has real standards of scholarly validity."

    Quote Originally Posted by BetterSense View Post
    As someone who regularly reads scientific papers, I'm more concerned about the technical quality of the papers. I can't understand how a publication that cost millions of dollars in funding to produce, which took some PhD student 6 years to finish, and which somehow got accepted to a prestigious journal, manages to make it all the way to publication without being proofread, edited, or competently typeset.
    Surface be damned, content matters and so does quality control.

    When I was a grad student we had to take a prelim in an outside area. I chose geography and so did one of my classmates. In one of the geo courses we both took we read a paper, published in a good journal. The key derivation in it had a mistake; most of the paper was dead wrong. My poor classmate couldn't get the math to come out as claimed and called me in a panic. I pointed out the mistake and told him he was right: it wasn't a typo, it was a mistake, and that shit happens.

    Some years ago a new PhD published a paper that completely overturned our understanding of a group of fishes' relationships. It was proofread, edited and competently typeset, met all the appearance standards and followed the accepted forms. The result was very surprising. Eventually some skeptical senior types who were experts on the group asked the kid for his data. He did the proper thing and delivered it as requested. The seniors reanalyzed with the software he used and the options reported in his paper, and got his results. Then they looked at his data and found that it had been miscoded. With the coding errors corrected they got the same old result.

    This type of error isn't rare. I uncovered something similar years ago when managing a large modeling project. I directed an RA to run a set of regressions. The results weren't at all what I expected. I asked him to check his work. Same unexpected results. I asked him to show me his data and instantly spotted an alignment error in it. With that error corrected (it was pervasive in the data as he had it, absent from the data as it should have been input), we got the expected results.

    What is rare is reviewers who have the time, inclination and expertise needed to audit the work they're reviewing. Reviewers shouldn't have to do this, but it's necessary. When I was a very young RA I was impressed with the importance of checking my work. I don't know whether this discipline is no longer transmitted, but errors continue to crop up.

    One of my collaborators recently told me he was preparing a couple of papers and wanted to put me on as co-author. My response was the usual: let me see the drafts. One of his grad students once followed up on one of my suggestions and wrote a paper with a major blunder in it. I saw the draft and insisted that they fix the blunder or take my name off.

  2. #22

    Join Date
    May 2004
    Location
    Montara, California
    Posts
    1,827

    Re: "...the field has real standards of scholarly validity."

    Quote Originally Posted by Dan Fromm View Post
    Systematics too.

    Cherry picking, another way of saying publication bias, is a real problem. So is poorly-done statistical analysis.
    I think the Amgen studies are very worrisome but not entirely surprising. Replication is key to the overall scientific process, yet few people attempt it. It's more of an ideal than a practice. Which, if you remember that scientists are just people, invariably leads to all sorts of opportunities for a little make-believe.

    I suspect Amgen's goal was to build a better foundation for their research by going back and trying to see what is true and what is not, in order to better plan their own future research--why keep wasting money on mirages?
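    A minimal simulation of the publication-bias point, with invented numbers (the effect size, sample sizes, and publication rule below are assumptions for illustration, not anything from the Amgen work): simulate many small studies of a weak effect, "publish" only the significant positive ones, and the published record overstates the truth several-fold.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        true_effect = 0.1            # small real effect, in standard-deviation units
        n_per_study, n_studies = 20, 10_000

        published = []
        for _ in range(n_studies):
            sample = rng.normal(true_effect, 1.0, n_per_study)
            t, p = stats.ttest_1samp(sample, 0.0)
            if p < 0.05 and t > 0:   # only "significant" positive results see print
                published.append(sample.mean())

        print(f"true effect:           {true_effect}")
        print(f"mean published effect: {np.mean(published):.2f}")   # roughly 0.55, several times the truth
        print(f"fraction published:    {len(published) / n_studies:.1%}")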

    --Darin

  3. #23

    Join Date
    Dec 2001
    Location
    NJ
    Posts
    8,484

    Re: "...the field has real standards of scholarly validity."

    Darin, I'm not sure that there's much flat-out dishonesty outside the pharmaceutical industry.

    I've seen a lot of fishing in the data to find something, anything, that looks real. This is a major sin, another form of cherry picking. I once nailed a contractor in a collections dispute, and its econometrician, for fishing. The data they used convicted them, and data we had on other contractors' performance (they didn't have it, didn't challenge my analysis or ask for it) executed and buried them.

    There are often attempts to find subsets of experimental subjects in which the treatment (medical, marketing) seems to be effective. People in my group used to comfort sponsors of failed (as in, impossible to believe the idea would make money) marketing trials with the news that the marketing treatment worked for a tiny segment but not, alas, for enough prospects to make it profitable. There's a literature on the appropriate statistical treatment of multiple pairwise comparisons; look up, e.g., the Bonferroni correction, and lose heart.
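    A quick sketch of the multiple-comparisons trap, with made-up numbers: screen twenty customer segments for a treatment that truly does nothing, and the chance that at least one segment looks "effective" at p < 0.05 is about 1 - 0.95^20, roughly 0.64; the Bonferroni-corrected threshold of 0.05/20 pulls that back to about 0.05.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        k, n, trials = 20, 200, 1_000        # 20 segments, 200 subjects per arm

        naive_hits = bonf_hits = 0
        for _ in range(trials):
            # the treatment has no effect in any segment, by construction
            pvals = np.array([
                stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
                for _ in range(k)
            ])
            naive_hits += (pvals < 0.05).any()       # uncorrected segment-hunting
            bonf_hits  += (pvals < 0.05 / k).any()   # Bonferroni-corrected

        print(f"P(some segment looks effective), naive:      {naive_hits / trials:.2f}")   # ~0.64
        print(f"P(some segment looks effective), Bonferroni: {bonf_hits / trials:.2f}")    # ~0.05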

    Wishful thinking, sometimes spelled obtaining more funding, is a killer.

  4. #24
    Moderator
    Join Date
    Jan 2001
    Posts
    8,653

    Re: "...the field has real standards of scholarly validity."

    Dan - not your field, but you will appreciate the issues raised here.

  5. #25

    Join Date
    Dec 2001
    Location
    NJ
    Posts
    8,484

    Re: "...the field has real standards of scholarly validity."

    Oren, thanks for the link. We fought that battle quite a while ago. Stepwise fitting methods are sinful.
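    A minimal illustration of the sin, using invented numbers: the screening step that naive forward/stepwise selection starts from, applied to fifty pure-noise predictors and an unrelated outcome, reliably "finds" a few significant predictors.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n, k = 100, 50                   # 100 observations, 50 candidate predictors
        X = rng.normal(size=(n, k))      # predictors are pure noise
        y = rng.normal(size=n)           # outcome unrelated to every predictor

        # keep any predictor whose correlation with y reaches p < 0.05,
        # as the first pass of naive stepwise selection would
        selected = []
        for j in range(k):
            r, p = stats.pearsonr(X[:, j], y)
            if p < 0.05:
                selected.append(j)

        print(f"'significant' predictors found in pure noise: {len(selected)}")   # 2-3 on average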

    I first became aware that the secret police were collecting large volumes of telephone company message records in late September, 2001. This from close reading of a story in the Times, not from discussions with colleagues I believe were assisting the secret police. I have a little experience with AT&T long distance, local, and cellular message records, also a little experience with building targeting models.

    I long ago concluded that the secret police are having us on and are hiding the fact that they're lying to us behind claims of secrecy. They can't possibly have a large enough training sample (containing known bad guys) to build a good model that uses message records to identify bad guys. And a good model that purports to find extremely rare bad guys using message records will (a) fail to find many of them and (b) generate unmanageably large numbers of false positives.

    It is indeed possible that the secret police have other data, which no one's talking about and they're not boasting about, that will improve their ability to find bad guys before they misbehave, but given bad guys' rarity it's hard to believe that a targeting model will do the job. Good old fashioned police work, yes. Number magic performed by very bright computer scientists and statisticians, no.
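    Back-of-envelope arithmetic for the false-positive point. Every number below is an assumption for illustration, not anything reported in the thread; even with an implausibly accurate model, real targets are swamped by false alarms.

        # all figures are assumed, purely to show the shape of the problem
        population     = 300_000_000   # people whose message records get screened
        bad_guys       = 3_000         # assumed count of actual targets
        sensitivity    = 0.90          # fraction of real targets the model flags
        false_pos_rate = 0.01          # fraction of innocents flagged (optimistic)

        true_hits  = bad_guys * sensitivity
        false_hits = (population - bad_guys) * false_pos_rate

        print(f"targets caught:      {true_hits:,.0f}")              # 2,700
        print(f"targets missed:      {bad_guys - true_hits:,.0f}")   # 300
        print(f"false positives:     {false_hits:,.0f}")             # ~3,000,000
        print(f"flags that are real: {true_hits / (true_hits + false_hits):.3%}")   # ~0.09%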

  6. #26
    Moderator
    Join Date
    Jan 2001
    Posts
    8,653

    Re: "...the field has real standards of scholarly validity."

    Dan - thanks, that is a terrific analog to the biomarker problem.

    It also flags a related conceptual tangle that people stumble into all the time: attributing predictive value to a noisy diagnostic without taking the base rate of prevalence into account.
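    The tangle in one formula: positive predictive value from Bayes' rule. The sensitivity and specificity values below are assumed for illustration; the point is how PPV collapses as prevalence falls, no matter how good the test is.

        # PPV of a noisy diagnostic, straight from Bayes' rule
        def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
            true_pos  = sensitivity * prevalence
            false_pos = (1.0 - specificity) * (1.0 - prevalence)
            return true_pos / (true_pos + false_pos)

        # a 99%-sensitive, 99%-specific test, applied at ever lower base rates
        for prev in (0.5, 0.01, 1e-4, 1e-6):
            print(f"prevalence {prev:>8}: PPV = {ppv(0.99, 0.99, prev):.4f}")
        # 0.5 -> 0.99, 0.01 -> 0.50, 1e-4 -> 0.0098, 1e-6 -> 0.0001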

  7. #27
    Tin Can
    Join Date
    Dec 2011
    Posts
    22,505

    Re: "...the field has real standards of scholarly validity."

    This thread is scarier than the junkie thread.

    not LOL

  8. #28

    Join Date
    Jan 2007
    Location
    Sonora, California
    Posts
    1,475

    Re: "...the field has real standards of scholarly validity."

    Quote Originally Posted by Dan Fromm
    They can't possibly have a large enough training sample (containing known bad guys) to build a good model that uses message records to identify bad guys. And a good model that purports to find extremely rare bad guys using message records will (a) fail to find many of them and (b) generate unmanageably large numbers of false positives. It is indeed possible that the secret police have other data, which no one's talking about and they're not boasting about, that will improve their ability to find bad guys before they misbehave, but given bad guys' rarity it's hard to believe that a targeting model will do the job. Good old fashioned police work, yes. Number magic performed by very bright computer scientists and statisticians, no.

    Is this just you being cranky again or are you throwing statisticians under the bus here? I will assert that it is not fair to throw us all under the bus. Any (honest) statistician with even a modest formal education in the field would very quickly point out the folly of trying to develop a model as you have described above.


    EDIT: I do, however, agree that the prevalence of poorly done statistical analyses is alarmingly high. I've always imagined that this was being perpetrated by individuals with little or no formal education in statistics. My own experience working with many very intelligent and highly educated technical people is that they 1) do not understand statistics and 2) discount its importance. The ready availability of powerful statistical software actually makes the situation worse. It lets non-statisticians perform statistical analyses that they have absolutely no understanding of.

  9. #29
    (Shrek)
    Join Date
    Mar 2011
    Location
    Montreal
    Posts
    2,044

    Re: "...the field has real standards of scholarly validity."

    Quote Originally Posted by Dan Fromm View Post
    I first became aware that the secret police were collecting large volumes of telephone company message records in late September, 2001. This from close reading of a story in the Times, not from discussions with colleagues I believe were assisting the secret police. I have a little experience with AT&T long distance, local, and cellular message records, also a little experience with building targeting models. I long ago concluded that the secret police are having us on and are hiding the fact that they're lying to us behind claims of secrecy. They can't possibly have a large enough training sample (containing known bad guys) to build a good model that uses message records to identify bad guys. And a good model that purports to find extremely rare bad guys using message records will (a) fail to find many of them and (b) generate unmanageably large numbers of false positives. It is indeed possible that the secret police have other data, which no one's talking about and they're not boasting about, that will improve their ability to find bad guys before they misbehave, but given bad guys' rarity it's hard to believe that a targeting model will do the job. Good old fashioned police work, yes. Number magic performed by very bright computer scientists and statisticians, no.
    Everything I've seen points to the simple fact that their system is completely useless unless they also have the content of phone calls (using speech-to-text software) and messages for analysis. And even with that, they don't have nearly the manpower they would need to follow up on each potential positive. Kinda makes me wonder if the system was built for some other purpose.

    Edit: if this is getting too political for this forum I will cease and desist.

  10. #30

    Join Date
    Dec 2001
    Location
    NJ
    Posts
    8,484

    Re: "...the field has real standards of scholarly validity."

    Brad, are you a statistician? I was trained as an econometrician, built models at a variety of scales that seem to have worked, and have worked as an applied statistician: designing and evaluating market trials, rescuing marketing trials run without control groups (mistakes happen, especially when corporate IT groups are involved), building and validating targeting models. I'm well acquainted with the targeting models many of my colleagues made for marketing purposes. They're usefully better than random selection, far from good enough for forensics.

    You accuse me of building straw men. Fair enough, but bricks can't be made without straw and targeting models can't be made with samples that contain no targets. I wish I could agree with you about the models I described, but they are exactly what Snowden's revelations make clear NSA believes it has. Targeting models are implicit in what computer scientists call machine learning. If NSA isn't fitting explicit targeting models and is doing what it represents as data mining, then it's using unvalidated profiles pulled from dark and smelly places. Bad practice, also dishonest.

    As for honesty, there are econometricians who are dishonest or incompetent; I've run up against enough of 'em. The same seems to be true of statisticians. We've had pharmaceutical company statisticians in for interviews, and many have complained about extreme pressure to be, um, creative. Some bent under it, others didn't.
