"I am working on a prediction model to help with diagnosis. In this particular area I need a model that has the highest possible sensitivity (low specificity is not a problem)."
One obvious comment is that you can achieve a sensitivity of 100% if you don't mind a specificity of 0%. So when you say "low specificity is not a problem," that statement is only partially true. What you mean is that false negatives are far more serious than false positives. How much more serious, though? Five times? Ten times? Once you've decided the relative costs of false negatives and false positives, the rest is easy. Just calculate the cost of a test as
r*(# of false positives) + 1*(# of false negatives) =
r*(1-prev)*(1-Spec) + 1*prev*(1-Sens)
where r is the ratio of the cost of a false positive to the cost of a false negative and prev is the prevalence of the disease in your population. Choose the cut-off that minimizes the total cost. It may turn out that the cut-off selected effectively sets sensitivity to 100% and specificity to 0%. In that case, you have learned that the most cost-effective choice is to skip the diagnostic test and treat everyone coming into the clinic as a positive result.
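The calculation above can be sketched in a few lines of code. Everything numeric here is hypothetical: the cost ratio r, the prevalence, and the (sensitivity, specificity) pairs at each candidate cut-off, which in practice you would read off the ROC curve for your own prediction model.

```python
# Hypothetical cost ratio and prevalence -- substitute your own values.
r = 0.1        # cost of a false positive relative to a false negative
prev = 0.3     # prevalence of disease in the clinic population

# Hypothetical (cutoff, sensitivity, specificity) triples,
# e.g. read off the ROC curve of the prediction model.
cutoffs = [
    (0.1, 0.99, 0.20),
    (0.3, 0.95, 0.50),
    (0.5, 0.85, 0.75),
    (0.7, 0.60, 0.90),
]

def cost(sens, spec, r, prev):
    # r * P(false positive) + 1 * P(false negative)
    return r * (1 - prev) * (1 - spec) + prev * (1 - sens)

for c, sens, spec in cutoffs:
    print(f"cutoff {c}: cost = {cost(sens, spec, r, prev):.4f}")

best = min(cutoffs, key=lambda t: cost(t[1], t[2], r, prev))
print("best cutoff:", best[0])

# Compare against the degenerate rule "treat everyone"
# (sensitivity 100%, specificity 0%).
print("treat-everyone cost:", cost(1.0, 0.0, r, prev))
```

With these made-up numbers the middle cut-off wins, and it also beats the treat-everyone rule; with a smaller r or a higher prevalence, the degenerate rule can come out cheapest, which is exactly the skip-the-test situation described above.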
You'll see this in some health care settings, such as the prophylactic use of antibiotics when it is unclear whether an infection is bacterial or viral. There is a cost to a false positive (unnecessary treatment), but the cost of a false negative (leaving a bacterial infection untreated) might be so much larger that you're willing to endure a lot of unnecessary antibiotic use to prevent a few untreated bacterial infections.
This depends a lot on prevalence, of course. You would never start passing out antibiotics on a street corner, but it might be appropriate if your clinic is seeing a lot of bacterial infections and these lead to very bad outcomes if treatment is delayed.
Until you settle on the cost ratio described above, all of the statistics in the world won't help you answer your question.