Today I told Xiaohan I was writing a microbiology lesson plan, and also told her about the argument I had with a senior classmate at the last seminar over whether the eight-year program's microbiology course is really that terrible. Right after I finished, I noticed that very senior sitting diagonally across from me... so awkward... Well, now that I've talked the talk, I'd better at least open a placeholder post.

Just jotting down some scattered thoughts. Input: a combination of symptoms. Output: crisis assessment and emergency measures; possible diagnoses (and the workup each one requires); the physiological change behind each symptom and how to manage it; diagnosis with staging/grading; guideline-based treatment.

Collect information -> establish diagnosis -> staging/grading -> treatment and monitoring
Running in parallel with all of these: crisis assessment and emergency care.
The part that best reflects clinical "reasoning" should be the information-gathering process:
from the evidence in symptoms and signs, to seeking evidence through lab and imaging studies.
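This input-to-output mapping could be sketched as a minimal lookup structure. Everything below is a made-up placeholder, not clinical advice; the rule tables would have to come from real guidelines.

```python
# Minimal sketch of the workflow: crisis checks run before (in parallel with)
# the main collect -> diagnose -> stage -> treat pipeline.
# All rules here are toy examples for illustration only.

RED_FLAGS = {
    # symptom set -> emergency action
    frozenset({"chest pain", "diaphoresis"}): "ECG now, treat as ACS until excluded",
    frozenset({"stridor"}): "secure the airway",
}

CANDIDATES = {
    # symptom -> (possible diagnosis, workup it requires)
    "chest pain": [("acute coronary syndrome", "ECG, troponin"),
                   ("pulmonary embolism", "D-dimer, CT angiography")],
}

def advise(symptoms):
    symptoms = set(symptoms)
    # 1. crisis judgment: fire any red-flag rule whose symptoms are all present
    alerts = [action for flags, action in RED_FLAGS.items() if flags <= symptoms]
    # 2. collect candidate diagnoses and the checks needed to confirm them
    workup = [item for s in symptoms for item in CANDIDATES.get(s, [])]
    return {"alerts": alerts, "differential": workup}
```

The point of the sketch is the ordering: emergency rules are checked on the raw symptom set before any differential is even assembled.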

So many expert systems have already been built that I wonder whether building another one still means anything. I just want a "cheat sheet" system that gives doctors hints. After studying so much colon cancer I still failed to think of Peutz-Jeghers, and that is hardly the only thing I have forgotten, so a cheat sheet really would help. And when prescribing antibiotics, people still over-prescribe out of habit; reminder systems are said to reduce overuse.

It should be probability-based, Bayesian.
But where would my raw data come from?
It would be great if the system could also collect data while in use and update all the probabilities,
so the parameters could be learned online...
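The Bayes-plus-online-updates idea could look roughly like this naive-Bayes sketch. All priors, likelihoods, and disease/finding names are invented numbers for illustration; the conditional-independence assumption is a crude simplification.

```python
# Toy Bayesian diagnosis: update disease probabilities as findings arrive,
# and re-estimate the likelihood table itself from confirmed cases
# ("online learning"). All numbers are made up.

priors = {"flu": 0.1, "cold": 0.3}
# P(finding | disease) under a naive-Bayes independence assumption
likelihood = {("fever", "flu"): 0.9, ("fever", "cold"): 0.2}

def posterior(findings, priors, likelihood):
    scores = {}
    for d, p in priors.items():
        for f in findings:
            p *= likelihood.get((f, d), 0.5)  # 0.5 = "no information"
        scores[d] = p
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

def online_update(finding, disease, counts, likelihood, present=True):
    # counts[(finding, disease)] = (cases of the disease seen,
    #                               cases where the finding was present)
    n, k = counts.get((finding, disease), (2, 1))  # Laplace-smoothed start
    n += 1
    k += int(present)
    counts[(finding, disease)] = (n, k)
    likelihood[(finding, disease)] = k / n  # updated P(finding | disease)
```

Each confirmed case nudges the likelihood table, which is exactly the "collect data while in use and update the probabilities" loop above.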

A beautiful fantasy; I have no idea where to start.

1. Today's health-monitoring wristbands claim to detect which stage of the sleep cycle you are in. Could we build a smart alarm clock that wakes you at an opportune moment, so you feel less groggy?
2. Use biofeedback methods to help people fall asleep.
3. My dad would definitely love this one... listen to the radio before bed, and the radio switches itself off once you fall asleep...
4. Yesterday I read in a book about using heart rate variability to infer cardiac autonomic function; a wristband could do this too. "Autonomic function" sounds pretty fancy.
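For point 4, one standard time-domain HRV index is RMSSD over successive RR intervals, which a wristband could in principle compute. A tiny sketch with made-up intervals:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, a standard
    time-domain HRV index; higher values generally reflect more vagal
    (parasympathetic) activity."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# hypothetical RR intervals in milliseconds, e.g. derived from a wristband PPG
beats = [800, 810, 790, 805, 795]
print(rmssd(beats))
```

Real devices first need reliable beat detection and artifact rejection before any index like this is meaningful.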

A few comments:
1. Is there any API to get the data?
2. I have not even taken Signals and Systems; thinking about all this is a bit of big talk... one more item for the winter-break wishlist...
3. If I spent my whole life studying autonomic function, I think it would be worth it. Many things previously brushed off as "functional changes" are related to autonomic dysfunction, and some hard-to-pin-down conditions such as autoimmune diseases may be as well. It is actually quite a formidable system; it is just still too mysterious for now...

1. Current models are all epicutaneous: shave the fur, sensitize with a patch, then challenge with a patch.
But I need an injection model!
The problem with systemic sensitization by intraperitoneal injection is that the immune cells may not home to the right place.
Challenge by injection may not work either.

2. So...
try injection first...
If that fails... try injecting immune complexes directly...

3. If that still fails:
the problem with the epicutaneous model is that we do not know which neurons to block,
and for the eye we do not know how to administer the drug either.

That's it for now...

http://www.ncbi.nlm.nih.gov/pubmed/23685721

Takeaway: you should be aware of the problem to be solved in your own field… and keep it in mind… and… try to find a solution elsewhere.
So: read the literature, drop by other labs often, and go to conferences and symposia of every shape and size…
This is exactly where the Institute of Basic Medical Sciences falls short…
On top of that I am too lazy to read papers, so my attention to many problems goes no further than the problems themselves; that kind of attention is probably just limping along…

Sitting in a bus stuck in traffic, I feel like a fecalith in an obstructed bowel.

from http://groups.csail.mit.edu/medg/people/psz/home/Pete_MEDG_site/Research.html

EVOLUTION OF MY MEDICAL ARTIFICIAL INTELLIGENCE RESEARCH by Peter Szolovits My research interests broadly include much of biomedical informatics. Throughout my career, I have avoided a tightly focused concentration on a single topic. Instead, I have tried to define my research interests by the demands of health care and how they could be satisfied by computing approaches. My graduate work in the early 1970’s focused on development of specialized application languages to support the computing needs of various disciplines. When I joined the MIT faculty in 1974, I met a group of doctors from Tufts/New England Medical Center who were trying to elucidate the thought processes of expert doctors as they performed diagnostic and therapeutic reasoning, and to build advisory programs that could help all doctors work as well as the best. I found that the challenges of this field provided an excellent focus for my interests in artificial intelligence, knowledge representation and reasoning. I therefore made the commitment to learn some medicine (I know enough jargon to be able to play a doctor on TV!) and to understand how AI techniques can help computers “think through” complex medical decision problems. My early doctoral students made excellent contributions to such AI challenges: automated generation of explanations, qualitative modeling of physiology, meta-level control of computation, expert systems based on pathophysiologic models at different levels of detail, reasoning about individual patient preferences, and temporal reasoning. We contributed to the work that became identified as the “expert systems” boom of the 1980’s, enriching the applicable techniques.

In medical AI, my first effort in 1975 was to re-engineer a previously-developed very inefficient diagnosis program for renal (kidney) diseases in order to make experimentation with it feasible. With my colleague Steve Pauker, I also wrote what turned out to be a very influential survey paper of various AI methods of reasoning in medical diagnosis. It explains different reasoning methods adopted by some of the early medical AI programs and points out challenges in more clearly defining the semantics of the knowledge they purport to contain. At this time, I became convinced that medical practice innately relies on feedback from what has been done before, so programs that advise human practitioners must work similarly. They must repeatedly and incrementally re-assess diagnostic and therapeutic plans as time passes, underlying pathologies evolve, and therapies take effect, and thus new observations are acquired. My other insight from these early analyses was that very rarely does an interesting patient (one for whom help from a computer might be useful) suffer from a single, untreated disease. Techniques that match symptoms to generic disease descriptions are rarely adequate, no matter whether they are described by rules, templates, or frames, because they fail to account for interactions. A full pathophysiological theory could in principle represent all the interactions, say in the form of hundreds of differential equations and thousands of parameters. Alas, it cannot yield a useful clinical model because collecting the data needed to estimate all these parameters is infeasible. We need instead a set of models that lie between simple symptom-diseases associations and such differential equation models. Patil’s thesis on diagnosis of acid-base and electrolyte disorders pioneered such an approach in the early 1980’s, and Long’s clinical-level heart disease models extended it.

The growing availability of real data, which was foreseen in the 1980’s, began to be real in the 1990’s, and threatens to inundate us in the 21st century, has made a huge change in my own work and that of my students. We (and our computers) can now learn even complex associations between observables and patient states from large corpora of labeled data. Therefore, much of our research today focuses on finding novel ways to characterize huge collections of data and to develop predictive algorithms that do a good job of anticipating changes in a patient’s state from his or her previous condition and treatments being applied. Thus far, we have been able to pick lots of low-hanging fruit by applying existing statistical and machine learning methods to the analysis of such data. However, I anticipate that new methods will also need to be developed in order to deal effectively with the great complexity of what happens to seriously ill patients. For example, at present we tend to build predictive models using a set of features of the case. These normally include aspects of the patient’s medical history, their current problems, current and recent laboratory measurements, drugs and procedures. Such features are designed to summarize the timeline over which the actual data have evolved, by computing trend lines, averages and spreads over various time periods, etc. These provide a useful, but hardly comprehensive account of how the patient’s illness has changed over time or how it has responded to previous attempts at treatment. In the 1990’s we tried to develop Markov process and dynamic systems models of such phenomena, but the complexity of the models coupled with the computational difficulty of solving them had limited their effectiveness to very small example problems. I believe that it is time to try such more sophisticated models again, so that they can exploit innately time-dependent phenomena such as pharmacokinetics.
Also, because virtually every patient’s condition shows the effects of combining perhaps multiple disease processes and therapeutic interventions, partially observable Markov decision processes can provide a more nuanced interpretation of data than simpler feature-based predictive models.
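The timeline-summarization features Szolovits describes (trend lines, averages, spreads over time windows) take only a few lines to sketch. The creatinine series below is hypothetical:

```python
# Collapse a series of (hour, lab value) measurements into the kind of
# summary features described above: mean, trend line, and spread.
# The values are hypothetical.

def summarize(series):
    times = [t for t, _ in series]
    vals = [v for _, v in series]
    n = len(vals)
    mean = sum(vals) / n
    # least-squares slope serves as the "trend line" feature
    tbar = sum(times) / n
    slope = (sum((t - tbar) * (v - mean) for t, v in series)
             / sum((t - tbar) ** 2 for t in times))
    spread = max(vals) - min(vals)
    return {"mean": mean, "trend": slope, "spread": spread}

# rising creatinine over 36 hours: the trend feature captures possible AKI
creatinine = [(0, 1.0), (12, 1.3), (24, 1.7), (36, 2.2)]
```

As the passage notes, such summaries are useful but lossy: a positive slope says the value is rising, not why, which is the gap that temporal models like POMDPs aim to fill.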

In 1974, I had estimated that by the early 1980’s most large hospitals would have switched their practice to electronic collection, storage and retrieval of their medical records. Because this has still not happened (as of 2010), I was clearly wrong. I had based my guess on a simple extrapolation of the costs of keeping paper records vs. the costs of electronic storage. I think those two cost curves did indeed cross in the early 1980’s, but I had not thought through the enormous costs (financial, institutional and human) of making such a switch. By the early 1990’s, I decided that I needed to develop a research focus on medical record systems, to better understand why it was so difficult to put them in place. This led to three different lines of work in my group, each of which still continues.

First, we realized as soon as the World Wide Web was created that this formed the most appropriate technical basis for presenting and using medical information. With our colleagues at Children’s Hospital Boston (CHB) and others in the Boston area, we formed a collaborative effort that demonstrated by 1994 the ability to view medical records from CHB anywhere on the internet (with suitable security) and by 1995 the ability to aggregate data from multiple institutions to present a longitudinal view of all data about a patient, even if collected at different hospitals and clinics. We also implemented and published consensus methods for assuring the protection of the confidentiality of patients’ clinical records.

Second, also in 1994, we outlined a vision of life-long personal health care supported by a computer system (called Guardian Angel) that kept comprehensive records, educated patients about their health, provided them decision support, and served to connect them to providers and peer groups. Although the vision is still far from being implemented, it did lead us to develop the earliest approaches to personally controlled health records, which have been influential on subsequent commercial developments such as Dossia, Google Health and Microsoft HealthVault. With the many anticipated changes in health care delivery and financing being debated in 2010, this vision is still relevant and cries out for continued research.

Third, through debates around 1993 about the propriety of adopting the social security number as a national health identifier, I felt challenged to study broader issues of patient privacy. We proposed cryptographic identification schemes that permitted aggregation of clinical data about a patient only via the participation of the patient. We demonstrated the risks of naive de-identification methods, which leave in place enough unique data about individual patients to make them re-identifiable. We also demonstrated that properly pseudonymized data could still be used effectively to support secondary uses of those data without casually revealing the identities of the patients. I also served on a National Research Council committee that reported on the poor state of protection of electronic health records in 1997 and inspired some provisions of the HIPAA privacy protections. Later, I also served on an Institute of Medicine committee that helped to define the role of institutional review boards in protecting patient confidentiality in data studies. Much remains to be done, both technically and in policy, to protect patients and encourage data sharing.

In the past decade, I have focused on developing techniques to extract codified clinical data from narrative text and speech conversations. This has proven important because of the Willie Sutton principle—that is where a lot of the data are. Practitioners are able and willing to describe clinical phenomena with great sophistication in natural language, but not in formal representations. It remains a challenge, however, to translate that narrative text to rich formal representations. Even just identifying the various ways of expressing the same facts and translating these into a terminology such as SNOMED or ICD remains difficult. Accounting for more subtle statements, such as accounts of how findings support or dispute diagnostic hypotheses, or contingent plans, seems beyond the state of the art. My colleagues and I have worked on a highly flexible language processing framework to support research in this area. We have also extended the dictionary of a popular parser with medical terminology, built pattern matching methods to find descriptions of medications and dosages in text, identified signs, symptoms, diseases, treatments, tests and their results using both pattern-based and statistical methods, identified temporal and likelihood indicators about such facts, determined some relations among facts mentioned together, etc. We have also developed two different approaches to de-identification of clinical data for research purposes, one based on dictionaries and patterns, the other on statistical machine learning techniques. These have been used to enable the re-use of large clinical datasets for research purposes in many projects.
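The pattern-based extraction of medications and dosages mentioned above can be caricatured with a single regular expression. This is only a toy: the three-drug lexicon and the dose/frequency grammar are stand-ins, and real systems use far richer dictionaries and statistical models.

```python
import re

# Toy pattern matcher for drug-dose-frequency mentions in narrative text.
# The drug list and the dose grammar are illustrative placeholders.

DRUGS = r"(?:aspirin|metoprolol|lisinopril)"
PATTERN = re.compile(
    rf"({DRUGS})\s+(\d+(?:\.\d+)?)\s*(mg|mcg|g)\s*(?:po\s+)?(qd|bid|tid|prn)?",
    re.IGNORECASE,
)

def extract_meds(note):
    # returns (drug, dose, unit, frequency) tuples; frequency may be None
    return [m.groups() for m in PATTERN.finditer(note)]

note = "Started aspirin 81 mg qd; continue metoprolol 25 mg bid."
# extract_meds(note) -> [('aspirin', '81', 'mg', 'qd'),
#                        ('metoprolol', '25', 'mg', 'bid')]
```

Even this toy shows why the problem is hard: abbreviations, misspellings, negation ("stopped aspirin"), and free-form frequencies all defeat simple patterns, which is where the statistical methods in the passage come in.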

The advent of the genomic revolution in the 1990’s promised to usher in an era of personalized medicine, where measurements of single nucleotide polymorphisms (or, eventually sequencing of individual genomes), genomic measurements of transcription, proteomic measurements of gene products, etc., could combine with clinical facts to provide highly customized diagnostic tools and methods to choose and optimize therapy for individual patients. We have contributed to this effort in the Partners Healthcare-based i2b2 project both through our natural language work and through efforts to bring together modeling techniques from statistics with those from artificial intelligence.

CURRENT PROJECTS

As of late 2010, I am focused on four ongoing research projects, although my interests continue on essentially all of the topics mentioned in my historical overview. These are described in more detail in my group’s home page, but I give brief summaries here.

Capturing Patient-Provider Encounter through Text Speech and Dialogue Processing: Create a system that captures primary medical data mentioned during an encounter between a health care provider and a patient. We use speech-to-text technology to create an approximate transcript of both sides of such a conversation, use natural language processing and machine learning methods to extract relevant clinical content from the transcripts, organize these according to medical conventions, and display the data to both provider and patient to allow them to correct mistakes made by this process. We are applying this in the Pediatric Environmental Health Clinic at Children’s Hospital Boston.

Integrating Data, Models and Reasoning in Critical Care: Develop techniques to collect, interpret, analyze and disseminate multi-channel data from intensive care collated with clinical notes and other patient data. Our group’s efforts focus on extracting meaningful data from textual records and on building algorithms that make sense of the clinical condition of the patient.

I2B2 (Informatics for Integrating Biology and the Bedside): Develop a scalable informatics framework that will bridge clinical research data and the vast data banks arising from basic science research in order to better understand the genetic bases of complex diseases. This knowledge will facilitate the design of targeted therapies for individual patients with diseases having genetic origins.

SHARP 4 (Secondary Use of Clinical Data): As part of a national collaborative group headed by Mayo Clinic, we are building tools to make it possible to re-use clinical data for purposes other than the patient care for which they were collected. Our efforts include natural language processing to identify salient facts and relationships in narrative textual data, defining classification models that can identify specific phenotypes from patient records, and defining ontologies to organize the relevant medical knowledge needed for these tasks.