Rec Text vs. Guideline Text

Another potentially useful line of inquiry is whether the relevance of a recommendation can be better explored by analyzing the recommendation text relative to the guideline text. Given a vector representation of each recommendation in a guideline, what role does similarity play? If the recommendation words occupy a separate space from the L words of the complete Summary, should they have their own basis in order to compare recommendations accurately, or must they share a space that includes the L words? Is a recommendation similar to the other recommendations in the guideline, or dissimilar? Is it only indirectly related to the guideline text, or central to it? How can we better rank the recommendations based on this assessment, and do recommendation strength and quality of evidence support that ranking?
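
As a concrete starting point, the sketch below ranks a guideline's recommendations by cosine similarity to the full summary text and to one another. TF-IDF stands in here for whatever vector space is ultimately chosen (fastText embeddings being the obvious alternative); the function and variable names are illustrative, not drawn from existing code.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_recommendations(summary_text, recommendations):
        """Rank recommendations by similarity to the full guideline text."""
        # Fit the space on the summary plus the recommendations so that
        # both are expressed in a single shared basis.
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform([summary_text] + recommendations)
        summary_vec, rec_vecs = matrix[0], matrix[1:]

        # Similarity of each recommendation to the guideline as a whole ...
        to_guideline = cosine_similarity(rec_vecs, summary_vec).ravel()
        # ... and to its peer recommendations (mean pairwise similarity,
        # excluding each recommendation's self-similarity of 1.0).
        pairwise = cosine_similarity(rec_vecs)
        to_peers = (pairwise.sum(axis=1) - 1.0) / max(len(recommendations) - 1, 1)

        # Sort by centrality to the guideline text, most central first.
        return sorted(zip(recommendations, to_guideline, to_peers),
                      key=lambda r: r[1], reverse=True)

A ranking produced this way could then be compared against recommendation strength and quality-of-evidence grades to see whether the two orderings agree.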

Results

A preliminary (but time-consuming) production and assessment of results for categorizing guidelines has not been promising. Cross-categorization, where a guideline belongs to more than one category, initially led to dispersion in the results. We removed these overlapping guidelines and rebuilt the model with only uniquely categorized guidelines, and we also used the complete set of summaries. Unfortunately, the results remained poorly correlated. A negative result, but lessons learned.
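
For reference, the deduplication step amounts to keeping only guidelines assigned to exactly one category. A minimal sketch, assuming a hypothetical mapping from guideline identifiers to sets of category labels:

    def unique_category_guidelines(guideline_categories):
        """Keep only guidelines assigned to exactly one category.

        guideline_categories: dict mapping guideline id -> set of category labels.
        """
        return {gid: next(iter(cats))
                for gid, cats in guideline_categories.items()
                if len(cats) == 1}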

FastText Redux

Treating the complete guideline, rather than a single recommendation, as the source, we have begun an implementation of the fastText algorithm, this time using the categories from the Guideline Central Summaries. A first pass will use 20 guidelines per category so that the class distribution is balanced. Our goal is to correctly predict the category of subsequent guidelines.
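
A minimal sketch of this pass, assuming the fastText Python bindings and a training file in fastText's "__label__<category> <text>" format built from the 20 summaries per category; the file name, hyperparameters, and example label are illustrative:

    import fasttext

    # Train a supervised classifier on the balanced training set, where each
    # line reads "__label__cardiology <full summary text>".
    model = fasttext.train_supervised(
        input="guideline_train.txt",
        epoch=25,
        lr=0.5,
        wordNgrams=2,
    )

    # Predict the category of a held-out guideline summary.
    labels, probabilities = model.predict("full text of a new guideline summary")
    print(labels[0], probabilities[0])

Accuracy on held-out guidelines can then be checked with model.test("guideline_valid.txt"), which reports precision and recall over a validation file in the same format.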