During our years working for hospitals and insurance providers, we saw many similar issues that caused unnecessarily increased costs. Healthcare providers, like many other industries, are facing drastically increased costs and decreased margins. Unlike tech companies, which were able to develop technology to automate and scale their basic operations, hospitals and healthcare providers have not. As a result, administrative and analytical costs grow year over year for lack of automation and process improvement. Some of the biggest costs we have seen weigh on healthcare providers are billing and financial processes, fraud detection, and third-party contract management. Automation and process improvements are needed in these three areas if healthcare providers want to start reducing some of their biggest costs.

Billing And Financial Analysis

Managing billing and financial analysis can be very tedious work that requires a combination of accounting discipline and entrepreneurial spirit. These are necessary practices for managing expenses and revenue. The problem is that even billion-dollar healthcare organizations lack automated billing and financial processes. Instead, hundreds of thousands to millions of dollars are often managed manually in Excel spreadsheets. The process of pulling the data and then slicing and dicing it wastes analysts' time. These financial tasks weigh on financial teams in many ways because they do not scale. As hospitals and insurance providers merge and grow, the problem amplifies and becomes even more difficult to manage manually. Typically, hospitals approach this problem by increasing their staff rather than creating a process or system that can continue to scale basic financial analytics and billing. This leads to increased operational costs that are difficult to rein in without laying off employees (and that isn't even considering the costs of hiring, firing, and rehiring).

The solution is automation (mic drop). Okay, automation is easier said than done. It requires buy-in from multiple stakeholders to financially back the projects, and trust from the directors and managers who read the reports and output of these automated systems. However, when executed well, these systems can eliminate hundreds of hours of manual work. It seems complicated, but automation done well often isn't; many times it actually simplifies the overall workload on your analysts. There might need to be some upskilling in basic SQL, but we are living in a data-driven world, and SQL is the data language (even NoSQL databases have SQL layers, because that is what we humans understand). If your team doesn't have the skill set to build the tools themselves, then find it. Automation consultants and engineers can be found both internally and externally. Once the main systems are built, they can be maintained by people with less knowledge of programming and automation. Automation will provide consistency, reduce waste, and allow your analysts to focus on more important work.
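As a minimal sketch of what this kind of automation can look like, here is a short Python example that replaces a manual Excel workflow with a query and an aggregation that can be rerun on a schedule. The database, table, and column names (billing_transactions, billed_amount, and so on) are hypothetical placeholders, not a real schema.

```python
# A minimal sketch of automating a recurring billing summary.
# The database path, table, and column names are hypothetical placeholders.
import sqlite3
import pandas as pd

def monthly_billing_summary(db_path: str) -> pd.DataFrame:
    """Replace a manual Excel workflow with a repeatable query + aggregation."""
    conn = sqlite3.connect(db_path)
    # Pull only the columns the report needs, letting the database do the filtering.
    df = pd.read_sql_query(
        """
        SELECT department, service_date, billed_amount, paid_amount
        FROM billing_transactions
        WHERE service_date >= date('now', '-1 month')
        """,
        conn,
        parse_dates=["service_date"],
    )
    conn.close()
    # The same slicing and dicing an analyst would do by hand, expressed once.
    summary = (
        df.groupby("department")
          .agg(
              total_billed=("billed_amount", "sum"),
              total_paid=("paid_amount", "sum"),
              claim_count=("billed_amount", "count"),
          )
          .assign(collection_rate=lambda t: t.total_paid / t.total_billed)
    )
    return summary

if __name__ == "__main__":
    print(monthly_billing_summary("billing.db"))
```

Run from a scheduler like cron, a script like this hands the finance team the same numbers every period without anyone re-pulling data into a spreadsheet.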
Fraud Detection

Fraud detection and adjudication are necessary practices for insurance providers, because healthcare fraud costs billions of dollars a year. It comes from patients, and also from healthcare providers that bill millions of dollars of upcoded and wasteful procedures purely to bolster their bottom line. In fact, there are even consultants and websites that specialize in helping healthcare providers bill creatively and max out their claims. These practices don't just increase costs for the insurance provider; at the end of the day, they increase costs for patients. Insurance providers have to pay analysts to manually comb through claims looking for possible patterns of abuse.

If you notice, the same problem we saw in billing is occurring here: the manual step. Healthcare billing and claims processing have gotten too big to handle manually. Manually processing claims just doesn't scale. This is where the tools of automation, big data, data warehousing, and analytics work well. They allow a specialist to create systems that munge the data effectively and keep scaling as the data grows. Currently, most insurance providers have fraud detection teams, but they can often only get through a small percentage of claims (even billion-dollar companies usually review only about 5–10% of claims manually). Sometimes they will even hire outside firms to, again, manually go through claims and look for low-hanging fruit. For instance, billing consultants found that 78% of 99215 codes (the highest-level established patient office visit) in Wisconsin were incorrectly used. This is a very easy issue to spot, and it becomes very expensive very fast. Yet all of this is usually limited to manual processes: pulling data from databases into Excel and then slicing and dicing the output.

The beauty of automated methods is that they can quickly reduce the number of claims that need manual processing. Even better, once you have created a system, you can replicate the results at a regular cadence. Even as analysts leave and new hires take their place, it is much easier to train them to understand the results of a system that works than to have them relearn what to look for in the raw data. This increases the number of claims an insurance provider adjudicates while improving the efficiency of the process. Many think you need complex machine learning algorithms to detect fraud, even when you are first developing models and your data isn't yet classified. However, when starting out, it is important to focus on cutting the number of target claims your analysts will look at by 60, 70, or 80%. This doesn't take a complex algorithm. It requires developing basic business rules that can help sort through the false positives. After properly managing claims and tracking which ones are fraud and which aren't, it becomes much easier to develop a machine learning model, because your data is now classified (fraud, not fraud). Fraud detection and adjudication is a slow and costly process for many healthcare providers. Automation and well-executed analytics offer thousands to hundreds of thousands of dollars of savings.
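To make the business-rules idea concrete, below is a hypothetical Python sketch. Rather than a machine learning model, two plain rules narrow the claim set down to the slice worth an analyst's attention. The column names and thresholds are illustrative assumptions, not a recommended rule set.

```python
# A hypothetical rule-based filter for narrowing claims before manual review.
# Column names and thresholds are illustrative, not a production rule set.
import pandas as pd

def flag_claims_for_review(claims: pd.DataFrame) -> pd.DataFrame:
    """Apply simple business rules so analysts review a fraction of claims."""
    # Rule 1: a provider bills the highest-level office visit (99215)
    # far more often than peers billing the same family of codes.
    visit_codes = claims[claims["cpt_code"].isin(
        ["99211", "99212", "99213", "99214", "99215"]
    )]
    rate_99215 = (
        visit_codes.assign(is_99215=visit_codes["cpt_code"].eq("99215"))
                   .groupby("provider_id")["is_99215"].mean()
    )
    heavy_upcoders = rate_99215[rate_99215 > 0.50].index  # assumed threshold

    # Rule 2: the billed amount is an extreme outlier for that procedure code.
    code_p99 = claims.groupby("cpt_code")["billed_amount"].transform(
        lambda s: s.quantile(0.99)
    )
    extreme_amount = claims["billed_amount"] > code_p99

    return claims[claims["provider_id"].isin(heavy_upcoders) | extreme_amount]
```

Only the claims that survive the rules go to a human, and every reviewed claim adds a fraud / not-fraud label that a future supervised model can learn from.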
Third Parties And Contract Management

One way to get systems automated and integrated is to use third parties. Hospitals, insurance providers, and other healthcare institutions are not tech companies. They don't focus on developing technical tools to help their day-to-day operations, and this is okay! What is not okay is constantly signing contracts with new third parties for redundant features. For instance, paying for multiple data visualization tools is pointless: Tableau, Qlik, and OBIEE all provide reporting features, and yet some companies use all three and more. Similar things can be said about relying on multiple financial systems.

Having redundant third-party contracts causes several issues. The first, and most obvious, is uncontrolled operational costs. Besides the upfront costs of signing multiple contracts, it also takes more employees to manage the systems, manage the contracts, and deal with the billing. These contracts are also difficult to break without paying a hefty early-termination fee. Thus, it is very important to reduce redundancy, because the true costs will exceed the cost of the contract itself. The second issue is that these systems rarely offer easy ways to access the data behind them, and lacking access to your own data is a problem. It is difficult to make good financial decisions if you don't know what is happening in your institution. When you rely on third-party software, you need to be aware of the terms of your contracts and the features of the software. Otherwise, your expenses will seem to grow from nowhere, and you won't be able to make good decisions.

Administrative costs like billing, fraud detection, and contract management are driving up healthcare costs for insurance providers, healthcare systems, and patients. With a little automation and process management, many of these costs can start to be mitigated. Automating processes like billing and financial analysis can be done with a combination of SQL and Python. Contract management requires a combination of process improvement and analysis of previous contracts. In the end, it can all lead to scalable cost savings that occur systematically and don't constantly need manual intervention. Does your team need help automating your data systems and analytics? Then please contact us today!

Read more about data science, healthcare analytics and automation below:
How To Use R To Develop Predictive Models
Web scraping With Google Sheets
What is A Decision Tree
How Algorithms Can Become Unethical and Biased
How To Develop Robust Algorithms
4 Must Have Skills For Data Scientists
In 2018, the US spent an estimated 3.5 trillion dollars on healthcare, a cost that is growing at nearly 5% a year. Rapidly growing pharmaceutical and administrative costs are just some of the factors driving this growth. This leaves the question: how are we going to reduce healthcare costs? We can't ignore the problem; otherwise, we will be spending way more than $15 for a Tylenol in 20 years. The first part starts with recognizing some of the largest problems in healthcare. We pointed these out in an infographic here (fraud, waste, and administrative costs). There are nearly a trillion dollars of spending that could be improved if the correct processes and policies were put into place. However, simply knowing what the problems are is not enough. We need to know where and why they are occurring. This brings us to the first step in starting a healthcare analytics project.

Develop Accessible and Accurate Data Systems

In order to find out where the problems are, you need data that accurately represents what is occurring in your hospital or healthcare system. Relying purely on anecdotal evidence is no longer sufficient. It's a fine starting point, and it can guide data scientists and engineers toward the correct data warehouses and dashboards to develop, but all of it requires quality data that is accessible to the correct users. Developing a data system (ETLs, data warehouses, etc.) that is both accurate and accessible to the right people is a necessary step in confirming what the biggest cost-saving opportunities are. This means thinking about what data you have, what data you need, who needs to access it, and what questions you hope to answer.

Not all of these can be answered right away, so it would be unwise to start by developing data warehouses that heavily adulterate the data from its original format. That is why most data teams develop systems with the same basic pattern (example below). You start with raw flat files that get ingested into a raw database. That data is run through a QA process just to double-check that it is clean and makes sense. Once the data is QAed, it is processed into a set of staging tables with minimal business logic added. Typically, this is more of a mapping phase: anything that needs to be standardized is standardized, duplicated data is cleaned up, and data normalization may occur. The goal here is to limit the amount of business logic and focus on creating an accurate picture of what the data represents, whether that is procedures done on patients, supplies used by a hospital, or computers and devices used by employees. This data is just an abstract depiction of what is occurring at the healthcare provider. Once the data is loaded successfully and QAed again (yes, QA should happen any time data has an opportunity to be adulterated), it can be loaded into tables that everyone who should have access can reach. This base data layer gives your data scientists the ability to create analytical layers that sit on top and populate various aggregate and metric tables.
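As a minimal sketch of the QA step in that pattern, the example below runs a few basic checks before data is promoted from the raw layer to staging. The table and column names (raw_procedures, claim_id, patient_id) are assumptions for illustration; real checks would encode your own data contracts.

```python
# A minimal sketch of QA checks run between the raw and staging layers.
# Table and column names are assumed for illustration.
import sqlite3

def qa_raw_procedures(conn: sqlite3.Connection) -> list[str]:
    """Return a list of QA failures; an empty list means the load can proceed."""
    failures = []
    cur = conn.cursor()

    # Check 1: the raw load actually contains rows.
    (row_count,) = cur.execute("SELECT COUNT(*) FROM raw_procedures").fetchone()
    if row_count == 0:
        failures.append("raw_procedures is empty")

    # Check 2: key fields are populated.
    (null_ids,) = cur.execute(
        "SELECT COUNT(*) FROM raw_procedures WHERE patient_id IS NULL"
    ).fetchone()
    if null_ids > 0:
        failures.append(f"{null_ids} rows missing patient_id")

    # Check 3: no duplicated claim identifiers.
    (dupes,) = cur.execute(
        """
        SELECT COUNT(*) FROM (
            SELECT claim_id FROM raw_procedures
            GROUP BY claim_id HAVING COUNT(*) > 1
        )
        """
    ).fetchone()
    if dupes > 0:
        failures.append(f"{dupes} duplicated claim_ids")

    return failures
```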
This basic outline is tried and true. More than likely, your data warehouse team already supports a system similar to the one described above. Typically the problem lies in accessibility and silos. The data might be produced for various teams, like finance or IT, but it might all exist in separate systems with different keys and structures. This makes it difficult to find value. At this point, there needs to be a much larger initiative to provide the correct accessibility as well as clear connections between the data systems. This is why using different keys for the same data across systems is a bad idea. Going into this would require a whole other set of blog posts and discussions, so we will leave it at the basic data system.
Define Problems Clearly

As data engineers and data scientists, we often learn about the subject matter we are working on through osmosis. We have to understand what the data represents, up to a point. However, it is difficult for us to know all the nuances of a specific subject and to understand all the context around the problems we are trying to solve. That is why it is important to clearly state the questions you are trying to answer and to provide as much context as possible. It gives data professionals a better picture of the problem they are trying to solve; the better they understand the problem, the clearer the solution becomes. Otherwise, they could spend hours going in circles or answering the wrong questions. This is actually pretty common, because sometimes what a stakeholder says is understood completely differently by a data professional, especially if they don't know the why. Then they have to come up with their own why to drive their analysis, which means they could provide the wrong support, or answers at the wrong granularity.

Create Concise Metrics

With all the modern verbiage floating around, it can be tempting to create algorithms and metrics that require too much adulteration of the original data, losing the value it could offer. Metrics that are concisely stated are more easily understood by everyone. This allows managers to make better decisions, because they understand what the metrics abstractly represent, rather than having to struggle to work out why some random ratio or calculated value means they should target customer x or y. This starts with a well-defined population. Whether that population is procedures, people, or transactions, it represents a specific group of entities. It is not always wise to look at an entire population first: larger populations are more likely to have confounding factors that are harder to eliminate with simpler metrics. Developing metrics focused on specific populations to start also makes it easier to notice patterns in larger groups later.
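As a small illustration of a concise, population-scoped metric, the hypothetical sketch below computes a 30-day readmission rate for a single procedure group rather than the whole hospital. The field names are assumptions, and the date columns are assumed to already be parsed as datetimes.

```python
# A hypothetical example of a concise metric scoped to a specific population.
# Assumes one row per admission with parsed admit_date / discharge_date columns.
import pandas as pd

def readmission_rate_30d(admissions: pd.DataFrame, procedure_group: str) -> float:
    """30-day readmission rate for one procedure group: a narrow, well-defined population."""
    # Define the population first: one procedure group, not the whole hospital.
    pop = admissions[admissions["procedure_group"] == procedure_group].sort_values(
        ["patient_id", "admit_date"]
    )
    # Days until the same patient's next admission, if any.
    pop = pop.assign(next_admit=pop.groupby("patient_id")["admit_date"].shift(-1))
    days_to_next = (pop["next_admit"] - pop["discharge_date"]).dt.days
    readmitted = days_to_next.between(0, 30)
    # One plainly stated number: share of stays followed by a readmission within 30 days.
    return readmitted.mean()
```

The result is easy to state plainly, "the share of stays in this procedure group followed by a readmission within 30 days," which is exactly the kind of metric a manager can act on without decoding it.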
Review The Outcome

Analyzing the outcomes and trends in the metrics you develop can help drive new insights and policies. This requires that the outcomes actually be studied, not just when the results are initially released but on a regular cadence. Typically, the first few times the metrics are examined there can be immediate benefits as policies are changed, entities are reached out to (as in the case of fraud), and, hopefully, cost savings are found. After the initial release of the dashboards, there needs to be a plan for how often the results will be reviewed. Too often will cause unnecessary actions to be taken before the previous ones have impacted the outcomes, and not often enough (or not at all) will waste both the dashboard and the time spent developing it. Make a plan with the necessary team members (usually a director or manager, some subject matter experts for context, and one of the data team members). This mix provides the right amount of perspective while informing the director and data team member of any needs from the subject matter team. The data team might need to update the metrics based on the findings.

Present The Results Simply

Data, programming, and algorithms can all get very complicated. As data scientists and engineers, we focus on a problem for so long that we can begin to think everyone understands it as well as we do. Yet this is not usually the case. That is why we need to take a step back from our work and attempt to view our research, models, and analytics from the perspective of a teammate who hasn't stared at the same problem for the past three months. It can be tempting to put all the graphs and logic from those three months into a presentation or report, but this is likely to confuse a stakeholder who hasn't been involved in your healthcare analytics project. Usually, all that is required is a few key numbers to drive decisions. When you present too many numbers, you risk burying the lede. It is important to focus on the key facts and figures first and, if support is needed, to provide it in a follow-up. This can be hard to do, because it feels like we didn't do a lot of work when we only show such a small slice of it. However, the key here is impact, not showing off.

Quality data and analytics help target and track savings opportunities in healthcare. When you start with accurate data and then develop concise metrics, your team has the chance to make informed decisions on policies that can positively impact patients' lives, and at the end of the day, that should be the goal. We hope this helps you in your healthcare analytics project. Please feel free to reach out to our team if we can go in depth on any point! Our consulting team would be happy to help.

One of the few things Democrats and Donald Trump might agree on is reducing pharmaceutical costs. Why? Healthcare costs continue to rise every year. Increasing pharmaceutical costs, technology improvements, and wasteful procedures are costing the US more every year, and this increase is starting to be felt by everyone. Some can't afford care, while others struggle to keep up with their medical bills. Those of you who are younger might not understand now, but in 30 years, when you go to the hospital in 2048 and are charged $200 for a single Tylenol pill, you will understand how ridiculous healthcare costs have become if we don't start fixing these problems now.

Why are healthcare costs continually going up? There are many reasons driving up medical costs, and with many causes come many possible solutions. Let's start by defining some of the biggest problems from a financial perspective. Below is an infographic of areas where there are massive opportunities to reduce healthcare costs. There are billions of dollars of spending that provide opportunities to reduce the overall US healthcare bill.

How Do You Reduce Healthcare Costs?

Reducing healthcare costs starts with recognizing the biggest problems, the ones that will have the biggest impact when solved. Problems like fraud, waste, and inefficient systems are some of the largest costs on the healthcare system. How do you recognize the biggest costs? Well, that is where data comes in! When we examine the successes of large corporations like Amazon, one thing that separates them from less data-driven companies is that they avoid phrases like "I think" without the data to back them up. Our team will be posting on how you can reduce your healthcare costs using data in the next few days.
If you have any questions on the subject matter before then, please reach out!

Our team was recently asked how data analytics and data science can be used to improve bottlenecks and patient flows in hospitals. Healthcare providers and hospitals can have very complex patient flows. Many steps intertwine, resources constantly shift between tasks, and patient severity and new arrivals continually reshuffle the order of who needs to be seen. This does not make it easy to approach process improvement in a hospital.

This is a process problem, something industrial engineers and Six Sigma practitioners love. They love looking at healthcare process problems in Excel sheets with thousands of tabs and thousands of rows of data. However, we are no longer limited to doing our analysis and model development in Excel spreadsheets on thousands of rows. We now have access to more complete data sets, which in our experience can run upwards of billions of rows, and with more powerful computational systems we can analyze patient flows and bottlenecks far more accurately and effectively. With tools like SQL, R, and Python, we can analyze these data sets quickly.

But it's not just about the tools. In fact, with such powerful tools it can be tempting to try to build models and algorithms that solve all the problems in one go. One of the big issues with this approach when looking at patient flows and bottlenecks in hospitals (or really any problem) is that it is far too general an angle. It makes it very difficult to assess when an analysis is finished, and it often keeps data scientists and analysts spinning for weeks without getting a real answer. The scope of looking at everything makes it very difficult to manage and pinpoint issues. Instead of trying to attack every process and procedure a hospital has, it is a better idea to break out several general categories of procedures/patient flows/processes that you believe are likely to have bottlenecks. Hospitals have so many different possible paths and processes (we will use the word "process" to describe the patient flow below) that blindly looking for some sort of bottleneck will take forever (much like fraud in healthcare: if you search too generally, it becomes near impossible to find).

The first step is to find the problem areas. Without knowing what you want to target, it is very difficult to know what the solution is. In a perfect world, your hospital has a database that tracks all the processes and procedures that are done. This makes it easy to develop a query or Jupyter notebook that can point out the main choke points, which will further help your team limit the amount of unnecessary work required. Once your team knows where the problems are, there is low-hanging fruit your teams can use to look for issues.

Abnormalities

Abnormalities, like inconsistent times in patient flows, whether for a specific doctor or in general, can indicate that there is a problem. Finding these specific outliers can be quite simple. For instance, let's say you look for outliers in times for patient flow x, and you hypothesize that specific days of the week or times of day are more likely to have longer times for certain steps. Then you pull out those steps and analyze them at a time granularity that lets you flag each process individually. You might find that summers see a decrease in productivity, or that during the 4th of July your ERs overflow, or perhaps some much less obvious data points. One key point here is that you come up with a theory first, because having a clear question and hypothesis makes it much easier to look for evidence. With a clear question, you know where to target further analysis. You can use a query to clean up your data and break it down to the granularity required. This might be at the hospital level, the doctor level, or maybe even down to the procedure level. From there you could apply a basic algorithm that is good at highlighting outliers (like a basic IQR calculation or something more complex).
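As a sketch of what that basic IQR calculation might look like, the example below flags steps whose durations fall outside 1.5 times the interquartile range for their step type. The DataFrame layout (one row per completed step, with step_name and duration_min columns) is an assumption for illustration.

```python
# A minimal sketch of the IQR approach for flagging outlier step durations.
# Assumes one row per completed step, with its type and duration in minutes.
import pandas as pd

def flag_duration_outliers(steps: pd.DataFrame) -> pd.DataFrame:
    """Flag steps whose duration falls outside 1.5 * IQR for that step type."""
    def mark(group: pd.DataFrame) -> pd.DataFrame:
        q1 = group["duration_min"].quantile(0.25)
        q3 = group["duration_min"].quantile(0.75)
        iqr = q3 - q1
        lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        return group.assign(is_outlier=~group["duration_min"].between(lower, upper))

    # Compare each step against others of the same type, not the whole hospital.
    return steps.groupby("step_name", group_keys=False).apply(mark)
```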
Once you have found outliers, further analysis can explore why there were longer or inconsistent times in specific processes. There are many plausible causes, but now you have decided on a category of procedures, hypothesized, and found a plausible weak point. Having these basic steps makes it much easier to move forward. From here you can repeat a similar process: theorize why you are seeing the outliers and what might be causing them, and research the data further. The cause could be bad processes, or having too few staff during the times of day when you need more people on hand (think queueing theory). Once you know which steps to look at, you can put next steps in place, such as process improvement teams who are now pointed at the exact problem, rather than simply sending in a team of analysts to follow doctors around and guess where the problem area is.

Chokepoints

Besides abnormalities, another common issue is that multiple processes might need the same resource. One way to locate these bottlenecks is based on the first point, abnormalities, since a bottleneck might be one of the problems causing an abnormality. However, chokepoints can also hide themselves when the step in question always runs long, so there is no abnormality to see. Instead, this analysis requires asking a simple question: are there steps in patient flows that overlap and seem to take a long time, or at least longer than expected? Certain steps might legitimately take a long time, like labs that take a while to run; there are others that shouldn't. Analyzing these steps could lead to hospitals putting in new suites or hiring new specialists to deal with the heavy load in certain areas.
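To sketch what that overlap question might look like in code, the hypothetical example below computes, for each resource, the peak number of steps demanding it at the same time, a rough proxy for queueing on a shared resource. The column names (resource, start_time, end_time) are assumed.

```python
# A rough sketch of spotting shared-resource chokepoints via overlapping steps.
# Assumes one row per step with start/end timestamps and the resource it uses.
import pandas as pd

def max_concurrent_demand(steps: pd.DataFrame) -> pd.Series:
    """For each resource, the peak number of steps needing it at the same time."""
    peaks = {}
    for resource, grp in steps.groupby("resource"):
        # Classic sweep: +1 at each step start, -1 at each end,
        # then track the running maximum of concurrent demand.
        events = pd.concat([
            pd.Series(1, index=grp["start_time"]),
            pd.Series(-1, index=grp["end_time"]),
        ]).sort_index()
        peaks[resource] = events.cumsum().max()
    return pd.Series(peaks).sort_values(ascending=False)
```

Resources whose peak demand routinely exceeds their capacity are candidate chokepoints worth a closer look.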
Conclusion

Improving patient flows is an important step toward reducing patient costs and improving patient satisfaction. By reducing the time patients spend in the hospital and healthcare system, you reduce the staff hours required to take care of them, which should help reduce overall costs. Our team always looks at it as a reduction for the patient, even though it should also, in turn, be a reduction in cost for the healthcare system. From our perspective, anything hospitals, insurance providers, and consultants can do to help reduce healthcare costs in our current system needs to be done.