Rand Study: Less than 1 million Americans lost health insurance due to ACA changes

  • Mark Flores

    Vice President and Co-Founder at AVYM Corporation

    Michael, what do you make of this RAND study (http://www.rand.org/content/dam/rand/pubs/research_reports/RR600/RR656/RAND_RR656.pdf)? RAND is a traditionally right-leaning group, and the study suggests that at least 9.3 million more Americans have health insurance now than in September 2013, virtually all of them as a result of the law.

    Additionally, as summarized in an LA Times article (http://www.latimes.com/business/hiltzik/la-fi-mh-rand-20140408,0,6208659.column#ixzz2yLGPAsYt), the Rand study confirms other surveys that placed the number of people who lost their old insurance and did not or could not replace it — the focus of an enormous volume of anti-Obamacare rhetoric — at less than 1 million. The Rand experts call this a “very small” number, less than 1% of the U.S. population age 18 to 64.
    Rand acknowledges that its figures have limitations — they’re based on a survey sample, meaning that the breakdowns are subject to various margins of error, and they don’t include much of the surge in enrollments in late March and early April. Those 3.2 million sign-ups not counted by Rand could “dramatically affect” the figures on total insureds, the organization said.

  • Michael A. S. Guth, Ph.D., J.D.

    Health Economist | Population Health Strategist | Healthcare IT Program Manager | Healthcare Management Consulting

    “Interesting” is my first reaction. If the number of those who lost their old insurance is really less than 1 million (less than 1% of the population), then that would make an effective TV commercial for Democratic candidates this election year: “Less than 1% of people lost their existing plan….” In my case, Humana gave me the option of keeping my insurance for one more year or terminating coverage with the purchase of a new plan on the marketplace. I could argue either way whether I should be counted among those who lost insurance, because my loss was merely delayed a year and I planned to buy a new policy anyway.

    My intuition was that those who lost their insurance coverage would be closer to 5 million, because insurers were free to discharge policyholders, and many insurers made it clear that they wanted to jettison their existing (medically underwritten) individual policy customers in favor of customers buying higher-premium policies with no medical underwriting. But my intuition was based on the number of media stories about people losing coverage: was it a case of media distortion of reality?

    Here are two uncertainties with the Rand modeling effort. (1) “The HROS is conducted using the RAND American Life Panel, a nationally representative panel of individuals who regularly participate in surveys.” What kind of people regularly participate in surveys? People who like to hear themselves talk? People who can’t wait to have an opportunity to express their opinions? Is that a biased sample at the outset? (2) “We extrapolated from our sample to estimate the number of people in the population as a whole in each insurance category, as discussed in more detail below.” The details indicate various weighted averages were used. Who determined the weights? Rand said it developed the weights in an unbiased manner using census data. I suspect different econometricians could develop different weights, all based on an “unbiased” method. Rand states that “5 percent of respondents in our survey would be associated with 9.9 million individuals in the population as a whole.” (A back-of-the-envelope version of that extrapolation is sketched after this comment.)

    They report in Table 2 that 26.2% of the sample (52 million Americans) were uninsured in 2013. That seems about right; previous estimates put the uninsured at 49 million, so this much passes the sanity check. Overall, given the somewhat conservative leanings of Rand Corp, this study seems like good news for the Obama Administration.
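
To make Rand’s extrapolation concrete, here is a back-of-the-envelope version in Python that uses only the two figures quoted above (5% of respondents ≈ 9.9 million people; 26.2% of the sample ≈ 52 million uninsured). It illustrates sample-to-population scaling under those stated ratios, not Rand’s actual weighting procedure.

```python
# Back-of-the-envelope check of the sample-to-population extrapolation,
# using only the figures quoted in the comment above. This is not Rand's
# actual weighting code; it only shows the quoted numbers are consistent.

implied_population = 9.9e6 / 0.05              # 5% of sample ~ 9.9 million people
uninsured_2013 = 0.262 * implied_population    # Table 2: 26.2% uninsured in 2013

print(f"Implied population aged 18-64: {implied_population / 1e6:.0f} million")
print(f"Implied uninsured in 2013:     {uninsured_2013 / 1e6:.1f} million")
# ~198 million adults and ~51.9 million uninsured, consistent with the
# 52 million figure reported in Table 2.
```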

Big data: are we making a big mistake?

By Tim Harford March 28, 2014 11:38 am

Big data is a vague term for a massive phenomenon that has rapidly become an obsession with entrepreneurs, scientists, governments and the media

Five years ago, a team of researchers from Google announced a remarkable achievement in one of the world’s top scientific journals, Nature. Without needing the results of a single medical check-up, they were nevertheless able to track the spread of influenza across the US. What’s more, they could do it more quickly than the Centers for Disease Control and Prevention (CDC). Google’s tracking had only a day’s delay, compared with the week or more it took for the CDC to assemble a picture based on reports from doctors’ surgeries. Google was faster because it was tracking the outbreak by finding a correlation between what people searched for online and whether they had flu symptoms.

Not only was “Google Flu Trends” quick, accurate and cheap, it was theory-free. Google’s engineers didn’t bother to develop a hypothesis about what search terms – “flu symptoms” or “pharmacies near me” – might be correlated with the spread of the disease itself. The Google team just took their top 50 million search terms and let the algorithms do the work.
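
Google has not published its full model, but the recipe described here (rank a large pool of candidate search terms by how well they track CDC flu data, then fit a regression on the best performers) can be sketched in a few lines. Everything below is synthetic and purely illustrative: the candidate pool, the data and the column layout are invented, and only the choice of 45 terms echoes the number reported for Google Flu Trends.

```python
import numpy as np

# Toy sketch of the Flu Trends idea: select search terms whose weekly
# frequencies correlate with a CDC-style influenza-like-illness (ILI) series,
# then fit a linear model on the selected terms. All data are synthetic.
rng = np.random.default_rng(0)
weeks = 104
ili = 2 + 1.5 * np.sin(np.linspace(0, 4 * np.pi, weeks)) + rng.normal(0, 0.2, weeks)

n_terms = 200                                  # candidate search terms (invented)
searches = rng.normal(0, 1, (weeks, n_terms))
searches[:, :5] += ili[:, None] * rng.uniform(0.5, 1.5, 5)  # 5 genuinely flu-related terms

# Rank terms by correlation with the CDC series and keep the top 45.
corr = np.array([np.corrcoef(searches[:, j], ili)[0, 1] for j in range(n_terms)])
top = np.argsort(-np.abs(corr))[:45]

# Ordinary least squares on the selected terms plus an intercept.
X = np.column_stack([np.ones(weeks), searches[:, top]])
beta, *_ = np.linalg.lstsq(X, ili, rcond=None)
pred = X @ beta
print("In-sample correlation with the CDC series:", round(np.corrcoef(pred, ili)[0, 1], 3))
```

Because the terms are chosen purely for correlation, nothing in a model like this explains why they track flu, which is exactly the fragility the rest of the article explores.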

The success of Google Flu Trends became emblematic of the hot new trend in business, technology and science: “Big Data”. What, excited journalists asked, can science learn from Google?

As with so many buzzwords, “big data” is a vague term, often thrown around by people with something to sell. Some emphasise the sheer scale of the data sets that now exist – the Large Hadron Collider’s computers, for example, store 15 petabytes a year of data, equivalent to about 15,000 years’ worth of your favourite music.

But the “big data” that interests many companies is what we might call “found data”, the digital exhaust of web searches, credit card payments and mobiles pinging the nearest phone mast. Google Flu Trends was built on found data and it’s this sort of data that interests me here. Such data sets can be even bigger than the LHC data – Facebook’s is – but just as noteworthy is the fact that they are cheap to collect relative to their size, they are a messy collage of datapoints collected for disparate purposes and they can be updated in real time. As our communication, leisure and commerce have moved to the internet and the internet has moved into our phones, our cars and even our glasses, life can be recorded and quantified in a way that would have been hard to imagine just a decade ago.

Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”.

Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be “complete bollocks. Absolute nonsense.”

Found data underpin the new internet economy as companies such as Google, Facebook and Amazon seek new ways to understand our lives through our data exhaust. Since Edward Snowden’s leaks about the scale and scope of US electronic surveillance, it has become apparent that the security services are just as fascinated by what they might learn from our data exhaust.

Consultants urge the data-naive to wise up to the potential of big data. A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes.

But while big data promise much to scientists, entrepreneurs and governments, they are doomed to disappoint us if we ignore some very familiar statistical lessons.

“There are a lot of small data problems that occur in big data,” says Spiegelhalter. “They don’t disappear because you’ve got lots of the stuff. They get worse.”

Four years after the original Nature paper was published, Nature News had sad tidings to convey: the latest flu outbreak had claimed an unexpected victim: Google Flu Trends. After reliably providing a swift and accurate account of flu outbreaks for several winters, the theory-free, data-rich model had lost its nose for where flu was going. Google’s model pointed to a severe outbreak but when the slow-and-steady data from the CDC arrived, they showed that Google’s estimates of the spread of flu-like illnesses were overstated by almost a factor of two.

The problem was that Google did not know – could not begin to know – what linked the search terms with the spread of flu. Google’s engineers weren’t trying to figure out what caused what. They were merely finding statistical patterns in the data. They cared about correlation rather than causation. This is common in big data analysis. Figuring out what causes what is hard (impossible, some say). Figuring out what is correlated with what is much cheaper and easier. That is why, according to Viktor Mayer-Schönberger and Kenneth Cukier’s book, Big Data, “causality won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning”.

But a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down. One explanation of the Flu Trends failure is that the news was full of scary stories about flu in December 2012 and that these stories provoked internet searches by people who were healthy. Another possible explanation is that Google’s own search algorithm moved the goalposts when it began automatically suggesting diagnoses when people entered medical symptoms.

Google Flu Trends will bounce back, recalibrated with fresh data – and rightly so. There are many reasons to be excited about the broader opportunities offered to us by the ease with which we can gather and analyse vast data sets. But unless we learn the lessons of this episode, we will find ourselves repeating it.

Statisticians have spent the past 200 years figuring out what traps lie in wait when we try to understand the world through data. The data are bigger, faster and cheaper these days – but we must not pretend that the traps have all been made safe. They have not.

In 1936, the Republican Alfred Landon stood for election against President Franklin Delano Roosevelt. The respected magazine, The Literary Digest, shouldered the responsibility of forecasting the result. It conducted a postal opinion poll of astonishing ambition, with the aim of reaching 10 million people, a quarter of the electorate. The deluge of mailed-in replies can hardly be imagined but the Digest seemed to be relishing the scale of the task. In late August it reported, “Next week, the first answers from these ten million will begin the incoming tide of marked ballots, to be triple-checked, verified, five-times cross-classified and totalled.”

After tabulating an astonishing 2.4 million returns as they flowed in over two months, The Literary Digest announced its conclusions: Landon would win by a convincing 55 per cent to 41 per cent, with a few voters favouring a third candidate.

The election delivered a very different result: Roosevelt crushed Landon by 61 per cent to 37 per cent. To add to The Literary Digest’s agony, a far smaller survey conducted by the opinion poll pioneer George Gallup came much closer to the final vote, forecasting a comfortable victory for Roosevelt. Mr Gallup understood something that The Literary Digest did not. When it comes to data, size isn’t everything.

Opinion polls are based on samples of the voting population at large. This means that opinion pollsters need to deal with two issues: sampling error and sampling bias.

Sampling error reflects the risk that, purely by chance, a randomly chosen sample of opinions does not reflect the true views of the population. The “margin of error” reported in opinion polls reflects this risk: the larger the sample, the smaller the margin of error. A thousand interviews is a large enough sample for many purposes, and Mr Gallup is reported to have conducted 3,000 interviews.

But if 3,000 interviews were good, why weren’t 2.4 million far better? The answer is that sampling error has a far more dangerous friend: sampling bias. Sampling error is when a randomly chosen sample doesn’t reflect the underlying population purely by chance; sampling bias is when the sample isn’t randomly chosen at all. George Gallup took pains to find an unbiased sample because he knew that was far more important than finding a big one.
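
The arithmetic behind Gallup’s confidence is the standard margin-of-error approximation for a simple random sample, roughly z·sqrt(p(1−p)/n). A minimal sketch, assuming a 95% confidence level and the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 3_000, 2_400_000):
    print(f"n = {n:>9,}: +/- {100 * margin_of_error(n):.2f} percentage points")
```

The 2.4 million row only looks impressive: the formula assumes a random sample, so a tiny theoretical margin of error says nothing about the bias of a non-random one, which was the Digest’s real problem.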

The Literary Digest, in its quest for a bigger data set, fumbled the question of a biased sample. It mailed out forms to people on a list it had compiled from automobile registrations and telephone directories – a sample that, at least in 1936, was disproportionately prosperous. To compound the problem, Landon supporters turned out to be more likely to mail back their answers. The combination of those two biases was enough to doom The Literary Digest’s poll. For each person George Gallup’s pollsters interviewed, The Literary Digest received 800 responses. All that gave them for their pains was a very precise estimate of the wrong answer.

The big data craze threatens to be The Literary Digest all over again. Because found data sets are so messy, it can be hard to figure out what biases lurk inside them – and because they are so large, some analysts seem to have decided the sampling problem isn’t worth worrying about. It is.

Professor Viktor Mayer-Schönberger of Oxford’s Internet Institute, co-author of Big Data, told me that his favoured definition of a big data set is one where “N = All” – where we no longer have to sample, but we have the entire background population. Returning officers do not estimate an election result with a representative tally: they count the votes – all the votes. And when “N = All” there is indeed no issue of sampling bias because the sample includes everyone.

But is “N = All” really a good description of most of the found data sets we are considering? Probably not. “I would challenge the notion that one could ever have all the data,” says Patrick Wolfe, a computer scientist and professor of statistics at University College London.

An example is Twitter. It is in principle possible to record and analyse every message on Twitter and use it to draw conclusions about the public mood. (In practice, most researchers use a subset of that vast “fire hose” of data.) But while we can look at all the tweets, Twitter users are not representative of the population as a whole. (According to the Pew Research Internet Project, in 2013, US-based Twitter users were disproportionately young, urban or suburban, and black.)

There must always be a question about who and what is missing, especially with a messy pile of found data. Kaiser Fung, a data analyst and author of Numbersense, warns against simply assuming we have everything that matters. “N = All is often an assumption rather than a fact about the data,” he says.

Consider Boston’s Street Bump smartphone app, which uses a phone’s accelerometer to detect potholes without the need for city workers to patrol the streets. As citizens of Boston download the app and drive around, their phones automatically notify City Hall of the need to repair the road surface. Solving the technical challenges involved has produced, rather beautifully, an informative data exhaust that addresses a problem in a way that would have been inconceivable a few years ago. The City of Boston proudly proclaims that the “data provides the City with real-time information it uses to fix problems and plan long term investments.”

Yet what Street Bump really produces, left to its own devices, is a map of potholes that systematically favours young, affluent areas where more people own smartphones. Street Bump offers us “N = All” in the sense that every bump from every enabled phone can be recorded. That is not the same thing as recording every pothole. As Microsoft researcher Kate Crawford points out, found data contain systematic biases and it takes careful thought to spot and correct for those biases. Big data sets can seem comprehensive but the “N = All” is often a seductive illusion.
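
A toy simulation, with every number invented, makes Crawford’s point concrete: give two neighbourhoods identical roads but different rates of app adoption, and the “found data” will report very different pothole counts.

```python
import numpy as np

# Two neighbourhoods with the same number of potholes but different
# smartphone/app adoption. All numbers are invented for illustration.
rng = np.random.default_rng(1)
potholes_per_area = 100
detection_prob = {"affluent": 0.6, "low_income": 0.1}  # chance a pothole is ever reported

reports = {area: int(rng.binomial(potholes_per_area, p)) for area, p in detection_prob.items()}
print(reports)  # the affluent area reports far more potholes despite identical roads
```

Correcting for that gap means modelling who is and is not carrying a phone, which is exactly the careful thought about coverage that raw counts hide.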

Who cares about causation or sampling bias, though, when there is money to be made? Corporations around the world must be salivating as they contemplate the uncanny success of the US discount department store Target, as famously reported by Charles Duhigg in The New York Times in 2012. Duhigg explained that Target has collected so much data on its customers, and is so skilled at analysing that data, that its insight into consumers can seem like magic.

Duhigg’s killer anecdote was of the man who stormed into a Target near Minneapolis and complained to the manager that the company was sending coupons for baby clothes and maternity wear to his teenage daughter. The manager apologised profusely and later called to apologise again – only to be told that the teenager was indeed pregnant. Her father hadn’t realised. Target, after analysing her purchases of unscented wipes and magnesium supplements, had.

Statistical sorcery? There is a more mundane explanation.

“There’s a huge false positive issue,” says Kaiser Fung, who has spent years developing similar approaches for retailers and advertisers. What Fung means is that we didn’t get to hear the countless stories about all the women who received coupons for babywear but who weren’t pregnant.

Hearing the anecdote, it’s easy to assume that Target’s algorithms are infallible – that everybody receiving coupons for onesies and wet wipes is pregnant. This is vanishingly unlikely. Indeed, it could be that pregnant women receive such offers merely because everybody on Target’s mailing list receives such offers. We should not buy the idea that Target employs mind-readers before considering how many misses attend each hit.
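
A quick base-rate calculation shows how many misses can attend each hit. The numbers below are hypothetical, chosen only to illustrate the false-positive arithmetic, not to describe Target’s actual model:

```python
# Hypothetical figures, purely to illustrate the base-rate point: even an
# accurate-sounding model mostly flags shoppers who are not pregnant when
# pregnancy is rare on the mailing list.
customers = 1_000_000
pregnancy_rate = 0.02           # assume 2% of customers on the list are pregnant
sensitivity = 0.90              # fraction of pregnant customers the model flags
false_positive_rate = 0.05      # fraction of non-pregnant customers it also flags

true_hits = customers * pregnancy_rate * sensitivity                  # 18,000
false_hits = customers * (1 - pregnancy_rate) * false_positive_rate   # 49,000
precision = true_hits / (true_hits + false_hits)
print(f"Coupon mailings: {true_hits + false_hits:,.0f}; share actually pregnant: {precision:.0%}")
```

Under these made-up assumptions, only about a quarter of the women receiving baby coupons are pregnant; the anecdote survives because we only ever hear about the hits.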

In Charles Duhigg’s account, Target mixes in random offers, such as coupons for wine glasses, because pregnant customers would feel spooked if they realised how intimately the company’s computers understood them.

Fung has another explanation: Target mixes up its offers not because it would be weird to send an all-baby coupon-book to a woman who was pregnant but because the company knows that many of those coupon books will be sent to women who aren’t pregnant after all.

None of this suggests that such data analysis is worthless: it may be highly profitable. Even a modest increase in the accuracy of targeted special offers would be a prize worth winning. But profitability should not be conflated with omniscience.

In 2005, John Ioannidis, an epidemiologist, published a research paper with the self-explanatory title, “Why Most Published Research Findings Are False”. The paper became famous as a provocative diagnosis of a serious issue. One of the key ideas behind Ioannidis’s work is what statisticians call the “multiple-comparisons problem”.

It is routine, when examining a pattern in data, to ask whether such a pattern might have emerged by chance. If it is unlikely that the observed pattern could have emerged at random, we call that pattern “statistically significant”.

The multiple-comparisons problem arises when a researcher looks at many possible patterns. Consider a randomised trial in which vitamins are given to some primary schoolchildren and placebos are given to others. Do the vitamins work? That all depends on what we mean by “work”. The researchers could look at the children’s height, weight, prevalence of tooth decay, classroom behaviour, test scores, even (after waiting) prison record or earnings at the age of 25. Then there are combinations to check: do the vitamins have an effect on the poorer kids, the richer kids, the boys, the girls? Test enough different correlations and fluke results will drown out the real discoveries.

There are various ways to deal with this but the problem is more serious in large data sets, because there are vastly more possible comparisons than there are data points to compare. Without careful analysis, the ratio of genuine patterns to spurious patterns – of signal to noise – quickly tends to zero.
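
A small simulation of the vitamin-trial thought experiment shows the mechanism: even when the treatment does nothing at all, testing many unrelated outcomes at the conventional 5% level reliably produces a handful of “significant” findings. The data and the outcome count here are arbitrary.

```python
import numpy as np
from scipy import stats

# The "treatment" has no real effect, but we test many unrelated outcomes.
rng = np.random.default_rng(42)
n_children, n_outcomes = 200, 100

treated = rng.normal(0, 1, (n_children, n_outcomes))   # vitamins (no true effect)
control = rng.normal(0, 1, (n_children, n_outcomes))   # placebo

p_values = stats.ttest_ind(treated, control, axis=0).pvalue
print("Spurious 'significant' findings:", int((p_values < 0.05).sum()), "out of", n_outcomes)
# Roughly 5 of the 100 outcomes clear p < 0.05 purely by chance.
```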

Worse still, one of the antidotes to the multiple-comparisons problem is transparency, allowing other researchers to figure out how many hypotheses were tested and how many contrary results are languishing in desk drawers because they just didn’t seem interesting enough to publish. Yet found data sets are rarely transparent. Amazon and Google, Facebook and Twitter, Target and Tesco – these companies aren’t about to share their data with you or anyone else.

New, large, cheap data sets and powerful analytical tools will pay dividends – nobody doubts that. And there are a few cases in which analysis of very large data sets has worked miracles. David Spiegelhalter of Cambridge points to Google Translate, which operates by statistically analysing hundreds of millions of documents that have been translated by humans and looking for patterns it can copy. This is an example of what computer scientists call “machine learning”, and it can deliver astonishing results with no preprogrammed grammatical rules. Google Translate is as close to a theory-free, data-driven algorithmic black box as we have – and it is, says Spiegelhalter, “an amazing achievement”. That achievement is built on the clever processing of enormous data sets.

But big data do not solve the problem that has obsessed statisticians and scientists for centuries: the problem of insight, of inferring what is going on, and figuring out how we might intervene to change a system for the better.

“We have a new resource here,” says Professor David Hand of Imperial College London. “But nobody wants ‘data’. What they want are the answers.”

To use big data to produce such answers will require large strides in statistical methods.

“It’s the wild west right now,” says Patrick Wolfe of UCL. “People who are clever and driven will twist and turn and use every tool to get sense out of these data sets, and that’s cool. But we’re flying a little bit blind at the moment.”

Statisticians are scrambling to develop new methods to seize the opportunity of big data. Such new methods are essential but they will work by building on the old statistical lessons, not by ignoring them.

Recall big data’s four articles of faith. Uncanny accuracy is easy to overrate if we simply ignore false positives, as with Target’s pregnancy predictor. The claim that causation has been “knocked off its pedestal” is fine if we are making predictions in a stable environment but not if the world is changing (as with Flu Trends) or if we ourselves hope to change it. The promise that “N = All”, and therefore that sampling bias does not matter, is simply not true in most cases that count. As for the idea that “with enough data, the numbers speak for themselves” – that seems hopelessly naive in data sets where spurious patterns vastly outnumber genuine discoveries.

“Big data” has arrived, but big insights have not. The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever.

http://www.ft.com/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html#axzz2xKP6XqP3

Tim Harford’s latest book is ‘The Undercover Economist Strikes Back’.

Google Flu Trends’ Failure Shows Good Data > Big Data by Kaiser Fung

In their best-selling 2013 book Big Data: A Revolution That Will Transform How We Live, Work and Think, authors Viktor Mayer-Schönberger and Kenneth Cukier selected Google Flu Trends (GFT) as the lede of chapter one. They explained how Google’s algorithm mined five years of web logs, containing hundreds of billions of searches, and created a predictive model utilizing 45 search terms that “proved to be a more useful and timely indicator [of flu] than government statistics with their natural reporting lags.”

Unfortunately, no. The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic. Last year, Nature reported that Flu Trends overestimated by 50% the peak Christmas season flu of 2012. Last week came the most damning evaluation yet.  In Science, a team of Harvard-affiliated researchers published their findings that GFT has over-estimated the prevalence of flu for 100 out of the last 108 weeks; it’s been wrong since August 2011. The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.

In short, you wouldn’t have needed big data at all to do better than Google Flu Trends. Ouch.
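
The “recent-past temperatures” baseline the Science authors describe is just a lagged, or persistence, forecast. A minimal sketch on synthetic data, only to show how little machinery such a baseline needs:

```python
import numpy as np

# Naive persistence forecast: predict this week's flu activity from the
# average of the two most recent observed weeks. The series is synthetic.
rng = np.random.default_rng(3)
weeks = 156
cdc = 2 + 1.5 * np.sin(np.linspace(0, 6 * np.pi, weeks)) + rng.normal(0, 0.15, weeks)

forecast = np.array([(cdc[t - 1] + cdc[t - 2]) / 2 for t in range(2, weeks)])
actual = cdc[2:]
print(f"Mean absolute error of the two-week lag model: {np.mean(np.abs(forecast - actual)):.2f}")
```

The Science authors’ point is that Google’s far more elaborate model failed to beat even a baseline of this kind over much of the period they examined.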

In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated. As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The amount of data still tends to dominate discussion of big data’s value. But more data in itself does not lead to better analysis, as amply demonstrated with Flu Trends. Large datasets don’t guarantee valid datasets. That’s a bad assumption, but one that’s used all the time to justify the use of and results from big data projects. I constantly hear variations on the “N=All therefore it’s good data” argument from real data analysts: “Since Google has 80% of the search market, we can ignore the other search engines. They don’t matter.” Or, “Since Facebook has a billion accounts, it has substantively everyone.”

Poor assumptions are neither new nor unpredictable. When mainstream economists collectively failed to predict the housing bubble, the failure traced back to a neoclassical model built upon several assumptions, including the Efficient Markets Hypothesis, which holds that market prices incorporate all available information and, as Paul Krugman says, leads to the “general belief that bubbles just don’t happen.”

In the wake of epic fails like these, the natural place to look for answers is in how things are being defined in the first place. In the business community, big data’s definition is often some variation on McKinsey’s widely-circulated big data report (PDF), which defines big data as “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.”

Can we do better? I started asking myself and other data analysts what the key differences are between the datasets that underlie today’s GFT-like projects and the datasets we were using five to 10 years ago. This has led to what I call the OCCAM framework, a more honest assessment of the current state of big data and the assumptions lurking in it.

Big data is:

Observational: much of the new data come from sensors or tracking devices that monitor continuously and indiscriminately without design, as opposed to questionnaires, interviews, or experiments with purposeful design

Lacking Controls: controls are typically unavailable, making valid comparisons and analysis more difficult

Seemingly Complete: the availability of data for most measurable units and the sheer volume of data generated is unprecedented, but more data creates more false leads and blind alleys, complicating the search for meaningful, predictable structure

Adapted: third parties collect the data, often for purposes unrelated to the data scientists’, presenting challenges of interpretation

Merged: different datasets are combined, exacerbating the problems relating to lack of definition and misaligned objectives

This is far less optimistic a definition, but a far more honest appraisal of the current state of big data.

The worst outcome from the Science article and the OCCAM framework, though, would be to use them as evidence that big data’s “not worth it.” Honest appraisals are meant to create honest progress, to advance the discipline rather than fuel the fad.

Progress will come when the companies involved in generating and crunching OCCAM datasets restrain themselves from overstating their capabilities without properly measuring their results. The authors of the Science article should be applauded for their bravery in raising this thorny issue. They did a further service to the science community by detailing the difficulty in assessing and replicating the algorithm developed by Google Flu Trends researchers. They discovered that the published information about the algorithm is both incomplete and inaccurate. Using the reserved language of academics, the authors noted: “Oddly, the few search terms offered in the papers [by Google researchers explaining their algorithm] do not seem to be strongly related with either GFT or the CDC data—we surmise that the authors felt an unarticulated need to cloak the actual search terms identified.” [emphasis added]

In other words, Google owes us an explanation as to whether it published doctored data without disclosure, or if its highly-touted predictive model is so inaccurate that the search terms found to be the most predictive a few years ago are no longer predictive. If companies want to participate in science, they need to behave like scientists.

Like the Harvard researchers, I am excited by the promises of data analytics. But I’d like to see our industry practice what we preach, conducting honest assessment of our own successes and failures. In the meantime, outsiders should be attentive to the challenges of big data analysis, as summarized in the OCCAM framework, and apply considerable caution in interpreting such analyses.


Kaiser Fung

Kaiser Fung is a professional statistician for Vimeo and author of Junk Charts, a blog devoted to the critical examination of data and graphics in the mass media. His latest book is Numbersense: How to Use Big Data to Your Advantage. He holds an MBA from Harvard Business School, in addition to degrees from Princeton and Cambridge Universities, and teaches statistics at New York University.

http://blogs.hbr.org/2014/03/google-flu-trends-failure-shows-good-data-big-data/

Heart Valve Surgery Examples of ICD-10-PCS Alpha Indexes

There are instances in which the Alpha Index might not prove helpful in choosing the correct root operation.  Let’s consider valvuloplasty, which is defined as “surgical reconstruction of a deformed cardiac valve for the relief of stenosis or incompetence.”  In the ICD-10-PCS Alpha Index, the main term provides three potential root operations:

Valvuloplasty

see Repair, Heart and Great Vessels 02Q

see Replacement, Heart and Great Vessels 02R

see Supplement, Heart and Great Vessels 02U

Let’s review several examples of procedures that reconstruct a deformed heart valve:

Example 1:  Patient with severe aortic valve stenosis.  She underwent implantation of a transcatheter 26-mm Edwards SAPIEN heart valve.

Example 2:  Patient with severe tricuspid regurgitation.  He underwent a valvuloplasty completed with an annuloplasty ring sutured into place.

Example 3:  A newborn underwent a percutaneous balloon pulmonary valvuloplasty for treatment of pulmonary valve stenosis.

In the first example, the SAPIEN heart valve replaces the diseased native valve.  Replacement is defined as “putting in or on biological or synthetic material that physically takes the place and/or function of all or a portion of a body part.”  In this instance, the Index leads you to Table 02R to construct the correct code.

In example 2, an annuloplasty ring does not replace the entire valve.  Rather, it improves the shape of the annulus and reestablishes its physiological configuration.  So, this is not Replacement.  Nor is it Repair, because a device is used.  The correct root operation is Supplement which is defined as “putting in or on biological or synthetic material that physically reinforces and/or augments the function of a portion of a body part.”

In the final example, a balloon is placed in the pulmonary valve and inflated in order to enlarge the valve and increase blood flow.  Both Replacement and Supplement include a device that remains in place after the procedure is completed.  No device remains in the body in this procedure.  So, would Repair be the correct root operation?  No.

This procedure meets the definition for Dilation, which is “expanding an orifice or the lumen of a tubular body part.”  To access the correct Table, look up:  Dilation – Valve – Pulmonary, which leads you to the appropriate table:  027H.

The OBJECTIVE of the procedure always determines the root operation and correct ICD-10-PCS code assignment.    It is the coder’s responsibility to determine what the documentation in the medical record equates to in the PCS definitions. The physician is not expected to use the terms used in PCS code descriptions, nor is the coder required to query the physician when the correlation between the documentation and the defined PCS terms is clear.
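
The reasoning in the three examples above can be captured in a small decision sketch. This is a simplification for illustration only: the function name and flags are invented here, and it is no substitute for consulting the Alpha Index and the official ICD-10-PCS Tables.

```python
def valve_root_operation(replaces_valve: bool, device_left_in_place: bool,
                         expands_orifice: bool) -> tuple:
    """Simplified sketch of the root-operation logic in the valvuloplasty examples.

    Returns (root operation, table). Real coding involves many more
    distinctions (body part, approach, device, qualifier).
    """
    if device_left_in_place and replaces_valve:
        return ("Replacement", "02R")   # Example 1: transcatheter SAPIEN valve
    if device_left_in_place:
        return ("Supplement", "02U")    # Example 2: annuloplasty ring
    if expands_orifice:
        return ("Dilation", "027")      # Example 3: balloon valvuloplasty (pulmonary valve: 027H)
    return ("Repair", "02Q")            # fallback when no device is used

print(valve_root_operation(True, True, False))    # ('Replacement', '02R')
print(valve_root_operation(False, True, False))   # ('Supplement', '02U')
print(valve_root_operation(False, False, True))   # ('Dilation', '027')
```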

Humorous ICD-10 Codes by William Entriken

http://fulldecent.blogspot.com/2012/02/new-medical-bill-theres-code-for-that.html

SITUATION: You’ve been involved in a water-skiing accident where your skis have caught fire and now you are being rushed to the emergency room.

THE GOOD NEWS: There’s a code for that. ICD-10-CM (“we have to pass the bill so that you can find out what is in it”) has anticipated this, and your hospital will have no problem billing insurance for your treatment.

THE BAD NEWS: There are six different codes for this situation, and you’ll be delayed while the medical administrator chooses one.

Records: Here are some actual medical diagnosis codes. In fact, some of these could be used as front-page newspaper headlines if they ever happened:

Z3754 Sextuplets, all liveborn
W5922xS Struck by turtle, sequela
Z62891  Sibling rivalry
Z631 Problems in relationship with in-laws
V9107xD Burn due to water-skis on fire, subsequent encounter
T505x6A Underdosing of appetite depressants, initial encounter
V616xxD Passenger in heavy transport vehicle injured in collision with pedal cycle in traffic accident, subsequent encounter
V9733xD Sucked into jet engine, subsequent encounter
T63442S Toxic effect of venom of bees, intentional self-harm, sequela
Z621 Parental overprotection

Learning “Heart Age” Keeps Us Healthier by Beth Levine

When we go to the doctor for a checkup, we typically get lots of numbers thrown at us for our cholesterol levels, blood sugar, body-mass index, cardiovascular disease risk score, and the like. But most of us don’t pay much attention to the specifics of these numbers. After all, we don’t really know what they mean. Instead, we rely on the physician to tell us what needs to be raised or lowered to get healthy and what drugs we need to take to do that. So we typically leave the office with a vague notion of what is a little “off” in the body and no concrete plan for how to fix it. But now, researchers have tested a “Heart Age” calculator that presents the same likelihood of developing cardiovascular disease (CVD) as a percentage risk does, but in a way that patients can more realistically relate to: an estimated age of your heart.

The study, which took place at the University of the Balearic Islands in Palma, Spain, found that when provided with a heart age rather than a percentage likelihood of developing CVD within a decade, people gain a greater understanding of their risk of heart problems and are more motivated to make the lifestyle changes needed to reduce it. Although CVD is a very common condition and the number one cause of death in both the United States and the world as a whole, it appears that many patients do not fully grasp the gravity of its potential danger when it’s presented as a percentage risk score, perhaps because they think they’ll beat the odds, just as they believe they’ll win the lottery. As a result, they tend to take little action to improve heart health.
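
Conceptually, a heart-age calculator inverts a risk equation: it finds the age at which a person with otherwise ideal risk-factor values would have the same predicted 10-year risk as the patient. The sketch below is only meant to show that inversion; the risk function and every coefficient in it are invented, whereas real calculators use validated equations such as Framingham-based scores.

```python
import math

def ten_year_risk(age, smoker=False, systolic_bp=120, total_chol=180):
    """Invented toy risk curve - NOT a validated CVD risk equation."""
    score = 0.08 * age + 0.6 * smoker + 0.01 * (systolic_bp - 120) + 0.004 * (total_chol - 180)
    return 1 / (1 + math.exp(-(score - 6)))   # logistic curve, values in (0, 1)

def heart_age(patient_risk):
    """Age at which a person with ideal risk factors would match patient_risk."""
    for age in range(20, 101):
        if ten_year_risk(age) >= patient_risk:
            return age
    return 100

risk = ten_year_risk(50, smoker=True, systolic_bp=145, total_chol=240)
print(f"10-year risk: {risk:.0%}, heart age: {heart_age(risk)}")  # this toy 50-year-old 'ages' into the mid-60s
```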

The subjects were 3,153 adults, all of whom received a thorough health assessment. They were randomly divided into three groups. The first group was offered a traditional percentage risk for developing CVD. The second group was given the exact same information but in Heart Age form, indicating the age at which the patient’s heart truly functions rather than their chronological age. The third group served as a control, receiving no data on their hearts but only guidelines for a generally healthy lifestyle. One year later, all of the participants had a follow-up medical exam during which heart health was checked again.

The volunteers in both of the groups given information about their CVD risks had a much greater drop in risk scores compared to their peers in the control group. But the people provided with a Heart Age showed more significant improvements than those given the percentage risk, even though they had received no additional interventions. What’s more, those in the Heart Age group reported adopting a greater number of positive lifestyle changes, including smoking cessation. The Heart Age subjects quit smoking at four times the rate of their counterparts in the risk-score group, which is significant since smoking is one of the major controllable risk factors for developing CVD.

Often when a person is showing signs of damage from heart disease or at high risk, a doctor may prescribe statins to lower LDL cholesterol levels or high blood pressure medications such as beta blockers or ACE inhibitors. Improving diet, engaging in exercise, and making other lifestyle changes will most likely be encouraged, but many physicians focus on the pharmaceutical solution. However, this option comes with the chance of wide ranging side effects that can include depression, erectile dysfunction, insomnia, coughing, skin rash, memory loss, diabetes, and muscle pain that can be so awful when taking statins that patients discontinue their use.

Therefore, a study such as the present one is interesting in that merely mentioning the Heart Age (putting the situation into clear terms for those who may be 50 but living with the heart of a 65-year-old) appeared to be enough to motivate previously lackadaisical patients to take action to improve their heart health.

Imagine even how much more effective a tool such as this could be if it was paired with some simple guidelines for how to reduce that age through nutritious diet and getting regular physical activity!

Do you need to take supplements? (by Jon Barron)

“You often hear doctors say that there’s no need to supplement if you eat a balanced diet. If only that were true. Fast-food diets consisting of burgers, fries, pizza, and sodas have minimal nutritional value beyond proteins, fats, and carbohydrates. And even if you try and eat a balanced diet, the food we eat today is not the same as the food eaten 50–100 years ago. We have to compensate for the loss of ‘value’ in our food.

  • The nutritional value of foods took an immediate huge hit when the world turned to fertilizer farming. But even after that, values have continued to fall over the last 60 years as soils have become ever more depleted of key minerals.
  • According to a Rutgers University study, it now takes nineteen ears of corn to equal the nutritional value of just one ear of corn grown in 1940.
  • There is less than half the nutritional protein in today’s wheat as in the wheat our grandparents ate. [Correspondingly, levels of gluten and gliadin are much higher. Ed.]

When you think about it, what we’ve done is exchanged quality for quantity. You can’t keep increasing your yield per acre, at the same time steadily depleting your soil year after year, and not expect to lose something in the process. And what’s been lost is the quality of our food.”

Comment on HDL supplements

Daniel,

Like you, I am genetically predisposed to have low HDL levels.  Getting cardiovascular exercise seven days per week did not cause any significant increase in HDL.  However, I was able to boost my HDL by approximately 20–25% by taking one supplement.  A different supplement, the virgin coconut oil mentioned in this post, raised my HDL by another 25–30%.  I thus obtained roughly a 50% increase in HDL, which is the kind of efficacy claimed by pipeline pharmaceutical drugs.  The point is that no matter how much a person is convinced his or her HDL won’t budge because of genetics, these two supplements work and have therapeutic benefits that cannot even be matched by pharmaceutical drugs.

I am very familiar with Tricor, which is an Abbott Laboratories drug.  Tricor targets lowering triglycerides, not boosting HDL.  In fact, there is no credible clinical trial evidence that Tricor affects HDL, although some trials showed very modest increases in HDL.

On my most recent blood test from last week, my total cholesterol/HDL ratio was 2.9, which is outstanding.  In fact, it is the lowest value I have ever obtained in my life.  It is just one more piece of evidence that what I am saying really works — not just subjective opinion, but proven, objective fact duplicated by thousands of others.

Where do you get your coconut oil?  Also, how much do you take?

I keep comparing prices but currently buy the Vitacost brand from vitacost.com and the Swanson brand from swansonvitamins.com.  You will want to buy the 54 oz. container that costs about $20.

I should take 2 tablespoons per day — but it is not so easy in the summer.   In the winter, it is relatively easy as I use virgin coconut oil to sauté onions, add it to soups, and basically can add it to any meal that is heated. 

In the summer months, I often eat meals at room temperature or cold salads, so the virgin coconut oil is equivalent to eating a spoonful of lard.  It gags me to eat the fat, but I know it is essentially a wonder drug that is craved by the human body.

BTW, with Tricor, in my case, my HDL was elevated to 39 on my latest blood test results, the highest it’s been.  However, I’ve tried other supplements and am not averse to using them.  My father lived to be 91.  Part of his longevity was due to his exercising (he jogged a mile a day in his basement), eating well, and taking supplements.  I saw the writing on the wall and emulated him by starting to take supplements at least 20 years ago.

If you cannot get your HDL into the 50s with virgin coconut oil, I will be surprised.  You can imagine the health benefits from raising HDL that much.

I am also doing an experiment suggested by an M.D. in Houston: sublingual niacin at a 100 mg dose.  My HDL was high on the last blood test, but I won’t know the effect of the low-dose sublingual niacin until I can confirm the results in April.


Advice on Relative with Stage 4 Pancreatic Cancer

I did a simple Google search for "pancreatic cancer" and "intravenous Vitamin C" and was pleasantly surprised to find dozens of web sites with articles, including research sponsored by the National Institutes of Health and clinical trials underway.  It is absolutely necessary to take the intravenous ascorbic acid (IV AA) before any chemotherapy, as the chemotherapy will destroy significant parts of the body’s immune system, thereby reducing IV AA’s efficacy.

Please click on this link to read some articles for yourself.

https://www.google.com/search?sourceid=navclient&aq=&oq=pancreatic+cancer+%22intravenous+vitamin+c%22&ie=UTF-8&rlz=1T4SNNT_en___US409&q=pancreatic+cancer+%22intravenous+vitamin+c%22&gs_l=hp….0.0.0.10128………..0.#q=pancreatic+cancer+%22intravenous+vitamin+c%22&start=10

The following video summarizes in a nontechnical manner the key points that I made today:  (1) cancer cells love glucose or are “sugar feeders” as stated in the video, and (2) IV AA selectively destroys cancer cells while leaving normal cells unharmed.

http://ihealthtube.com/aspx/viewvideo.aspx?v=1357994a10890d1b

My web domain has an ongoing article on IV AA for cancer treatment that I update and expand every few months.  Please click here.  http://www.michaelguth.com/?p=41

Finally, as to your Synthroid use, my web site has an article on one doctor’s efforts to clarify the confusion over T3 & T4 combination drugs versus Synthroid.  http://www.michaelguth.com/?p=680  I suspect you will feel more energized if you switch from Synthroid to Armour Thyroid or Naturthroid.

Mike Guth

Armour Thyroid Preferred Over Synthroid

I just came back from a trip to Cleveland, OH, where I was speaking to a VP of a health insurance company, and she asked me about her Synthroid prescription.

This is what I wrote in a follow-up email to her:  ” Finally, as to your Synthroid use, my web site has an article on one doctor’s efforts to clarify the confusion over T3 & T4 combination drugs versus Synthroid.  http://www.michaelguth.com/?p=680  I suspect you will feel more energized if you switch from Synthroid to Armour Thyroid or Naturthroid.”

You and I both have hypothyroidism, as do a majority of Americans over age 50, although most just think they are slowing down with age and have never had their thyroid hormones tested.  Keep in mind that T3 is 500 times more active than T4, and if you are lethargic, then your body is really craving T3 and not T4.  Synthroid is nothing more than synthetic T4.  I would consider it medical malpractice for an endocrinologist to prescribe Synthroid to a male or female patient over 50, because he or she should know that the body’s ability to convert T4 to T3 diminishes with age and is significantly impaired by age 50.

For your blood tests, you do NOT want to test T3 and T4 total levels.  Instead, we only care about the unbound hormones, and thus you should test Free T3 and Free T4.  You will feel your best when your diurnal peak levels of thyroid hormones around 8:30 AM each morning are in the range of [3.4 - 4.2] for Free T3 and [1.45 - 1.77] for Free T4.  You can achieve those levels fairly easily with Armour Thyroid, but it will be almost impossible to reach the optimal Free T3 levels using Synthroid.  Please see the article on my web site for more information.