Community Medicine Question Quality Review
Executive Summary
This review covers a candidate sample of 100 validated non-gold questions (60 generic, 40 risky) drawn from a pool of 10,989 Community Medicine items. The sample was evaluated against a quality bar of 8 benchmark questions and 12 recent previous-year questions (PYQs).
The single most striking finding in this sample is a catastrophic collapse in Bloom's taxonomy level. The candidate distribution shows 56% of items at Bloom's Level 1 and 37% at Level 2, with only 7 items at Levels 3–5 combined. The benchmark set, by contrast, places every single item at Bloom's Level 3 or above. This is not a marginal gap; it is a structural failure of the candidate pool to produce the kind of applied, reasoning-based questions that Indian PG examinations (INICET, NEET-PG, UPSC-CMS) consistently demand.
Beyond the Blooms problem, the reviewed set contains multiple distinct quality failure modes: factually unsafe or wrong-key items, broken image-dependent questions, severe subject contamination (psychiatry and clinical medicine questions filed under Community Medicine topics), a large volume of pure-recall trivia with no discriminatory value, and several items whose options are so poorly constructed that the correct answer is transparent without any subject knowledge.
The overall disposition recommendation for this sample is: approximately 35–40% should be disabled outright, another 25–30% require substantive fixes before use, and only 30–35% are serviceable as-is — and even those serviceable items are predominantly low-difficulty recall questions that add limited value to a high-stakes PG preparation bank.
What Good Looks Like
The benchmark and PYQ sets establish a clear and consistent standard. Every benchmark item (e.g., dadd54bf, 9cd5b0d1, f79f6af0) presents a clinical or research scenario with numerical data, asks the candidate to perform a calculation or apply a concept, and uses distractors that represent plausible errors in reasoning rather than obviously wrong statements. The correct answer cannot be identified without actually working through the problem.
The best PYQ items follow the same logic in a slightly different format. cad4c6ad (syndromic surveillance at Kumbh Mela) grounds a conceptual question in a real-world public health scenario. 38fc00c2 (paired t-test in a PHC dietary intervention study) gives a concrete study design and asks for the appropriate statistical test. 69dd9e4d (Anaemia Mukt Bharat IFA dosing) tests a specific, updatable, program-level fact that requires knowing the current national guideline — not just a generic principle.
The defining features of good items in this subject are:
- Scenario or data first: the stem provides enough context that the question tests application, not just recall.
- Distractors that represent real misconceptions: wrong options are things a partially-prepared candidate might genuinely believe.
- Single defensible correct answer: the key is unambiguous and sourced to a current, authoritative reference.
- Appropriate Bloom's level: the majority of high-value items sit at Bloom's Level 3 (application) or above.
- Subject integrity: the question belongs unambiguously in Community Medicine, not in a clinical specialty.
Almost none of these features are consistently present in the candidate sample.
Main Issue Categories
1. Wrong Key or Factually Unsafe
Why this pattern is bad
A wrong key is the most serious quality failure possible. A candidate who studies correctly will be penalized, and a candidate who has learned an error will be rewarded. In a subject like Community Medicine where national program guidelines change (TB treatment regimens, BMW rules, immunization schedules), outdated or incorrect keys are a recurring hazard.
How it shows up
In this sample, wrong-key risk appears in three sub-patterns: (a) factually incorrect correct answers, (b) outdated guidelines presented as current, and (c) ambiguous stems where more than one option is defensible.
Example question IDs and explanations
d573c154 — "Waste Sharps should be disposed in?" — Key: Yellow bag.
This is factually incorrect under the Biomedical Waste Management Rules 2016 (amended 2018). Waste sharps (used, contaminated) are to be disposed in a white translucent puncture-proof container, not a yellow bag. Yellow bags are for human anatomical waste, animal waste, soiled waste, and expired medicines. This item has been in daily plan templates and is actively teaching the wrong rule to candidates.
Disposition: Disable immediately.
6f8c4e60 — "Which of the following vaccines is NOT included in the National Immunization Schedule?" — Key: Hepatitis B vaccine.
Hepatitis B vaccine has been part of India's Universal Immunization Programme since 2002 and is firmly in the National Immunization Schedule. The key is factually wrong. The question appears to be based on an outdated schedule or a misread source.
Disposition: Disable immediately.
3f64e7a8 — "What are the WHO criteria for the diagnosis of diabetes based on fasting venous blood sugar levels?" — Key: 120 to 180 mg/100ml.
Current WHO/ADA criteria define diabetes as fasting plasma glucose ≥126 mg/dL (7.0 mmol/L). The range "120–180 mg/100ml" does not correspond to any recognized WHO diagnostic threshold. The correct answer is not among the options as stated. This item is both factually wrong and has no defensible key.
Disposition: Disable immediately.
4f907973 — "The highest percentage of essential fatty acid is found in which of the following?" — Key: Corn oil.
Sunflower seed oil contains approximately 65–70% linoleic acid (the primary essential fatty acid), which is higher than corn oil (~55–60%). The key is contestable and likely wrong depending on the reference edition used. This is a classic example of a nutrition trivia item where the answer has shifted across textbook editions.
Disposition: Disable — the concept is low-yield and the key is unreliable.
93e03b09 — "Which of the following represents the standard error of proportions?" — Key: "Standard error of proportions."
The stem asks which option represents the formula, but no formula image is present. The options are four text labels ("Standard error of proportions," "Standard error of means," etc.). The correct answer is literally the same phrase as the question. This item has no functional content — it is a broken image-dependent question masquerading as a conceptual one (see also Category 3 below).
Disposition: Disable.
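For reference, the formula the original item presumably intended to display (but lost with the missing image) is the standard error of a sample proportion, where $p$ is the observed proportion and $n$ the sample size:

```latex
SE_p = \sqrt{\frac{p(1-p)}{n}}
```

A rebuilt item would need to present this expression alongside plausible distractors such as the standard error of the mean, rather than text labels that echo the stem.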
f75d4b5f — "A pandemic of H1N1 is suspected when?" — Key: "Community-level outbreaks in at least two WHO regions."
WHO pandemic phases for influenza are defined by sustained community-level transmission in at least two WHO regions (Phase 6). The key is broadly correct, but option A ("Sustained human-to-human transmission in multiple WHO regions") is also defensible and arguably more precise. The stem and distractors are not sufficiently differentiated to make this unambiguous.
Disposition: Fix — rewrite distractors to clearly distinguish Phase 5 from Phase 6 criteria.
4fb7d0ab — "What is an indicator of fecal contamination of water?" — Key: Enterococcus.
The standard and universally accepted indicator of fecal contamination of water is E. coli (or thermotolerant coliforms). Enterococcus is used as a supplementary indicator, particularly for recreational water and in some WHO guidelines, but it is not the primary or best-established indicator. Marking E. coli as wrong and Enterococcus as correct is misleading and likely to confuse well-prepared candidates.
Disposition: Disable or fix with a rewritten stem that specifically asks about recreational water or a secondary indicator context.
2. Wrong Subject or Wrong Topic Placement
Why this pattern is bad
Subject contamination wastes slots in Community Medicine question pools, misleads candidates about what to expect in the subject, and can cause scoring anomalies when questions are used in subject-specific tests. It also signals that the tagging and topic-assignment process lacks adequate subject-matter review.
How it shows up
In this sample, the contamination is severe and falls into two sub-types: (a) clinical medicine questions filed under Community Medicine topics, and (b) basic science questions (zoology, chemistry) filed under Community Medicine topics.
Example question IDs and explanations
db73f5ae — Topic: Environmental Health. Stem: "An 18-year-old girl presents with breathlessness, anxiety, palpitations, and tingling around the lips. There is a similar history in the past. What is the most likely diagnosis?" — Key: Panic attack.
This is a Psychiatry/Medicine clinical vignette. It has no connection to Environmental Health or Community Medicine. The clinical presentation described (hyperventilation syndrome / panic attack) is a standard Psychiatry question. It should not appear in any Community Medicine test.
Disposition: Disable from Community Medicine; reassign to Psychiatry if the item has merit there.
61f3cedd — Topic: Environmental Health. Stem: "Which of the following is NOT an objective of the National Mental Health Program?"
The National Mental Health Programme is a Community Medicine / Health Policy topic, but it has been filed under Environmental Health, which is incorrect. This is a topic placement error within the subject.
Disposition: Fix — reassign to Health Policy and Management or Public Health Administration.
de7767da — Topic: Non-Communicable Diseases. Stem: "Which arthropod possesses four pairs of legs?" — Key: Arachnida.
This is a basic zoology / entomology question with no clinical or public health application as written. It belongs in a Parasitology or Microbiology bank, not Non-Communicable Diseases in Community Medicine.
Disposition: Disable from Community Medicine.
060de3d5 — Topic: Non-Communicable Diseases. Stem: "What is the standard treatment for a patient with recently diagnosed sputum-positive pulmonary tuberculosis?" — Key: RHZE.
TB treatment regimens are primarily a Pharmacology / Medicine / Respiratory Medicine question. While TB control is a Community Medicine topic, the specific drug regimen question as written tests clinical pharmacology, not public health principles. Furthermore, with the transition from RNTCP to NTEP and the shift to daily regimens, this item needs to be verified against current NTEP guidelines before use.
Disposition: Fix — if retained in Community Medicine, reframe around NTEP program structure, not drug names.
9cd574d5 — Topic: Infectious Diseases. Stem: "In RNTCP, one Tuberculosis Unit covers how much population?" — Key: 500,000.
RNTCP has been replaced by NTEP (National Tuberculosis Elimination Programme). The program name in the stem is outdated. The population coverage figure may also have been revised. This is a factual currency problem as well as a topic placement issue (it belongs under Health Policy / National Programs, not Infectious Diseases).
Disposition: Fix — update to NTEP terminology and verify current figures.
3. Broken Delivery (Missing Image, Malformed Options, Incomplete Stem)
Why this pattern is bad
Image-dependent questions without the image are unanswerable by any reasoning process — the candidate must guess. Malformed options (where the option text is the same as the question, or where options are not parallel) destroy the discriminatory function of the item. These questions actively harm the test-taking experience and cannot be salvaged without the missing asset.
How it shows up
Two distinct sub-patterns appear: (a) questions that explicitly reference an image that is not present in the stem, and (b) questions where the option structure is so broken that the item has no functional content.
Example question IDs and explanations
cbd8585f — "Which is correct about the Vaccine Vial monitor shown in the image? (Recent NEET Pattern 2016-17)"
The stem explicitly says "shown in the image" but no image is present in the data. The VVM question is a legitimate and high-yield Community Medicine concept, but without the image showing the VVM at a specific stage (inner square darker than outer ring), the question is unanswerable. Candidates in daily plan templates are currently encountering this broken item.
Disposition: Fix — attach the correct VVM image (inner square darker than outer ring = discard point). The concept and key are sound; only the image is missing.
e654d803 — "The following symbol is used to depict awareness for which disease?" — Key: Breast cancer.
The stem says "the following symbol" but no symbol image is present. This is a pure image-dependent question (pink ribbon for breast cancer awareness) that cannot be answered without the image.
Disposition: Fix — attach the pink ribbon image. The concept is low-yield trivia but the item is at least factually correct.
93e03b09 — "Which of the following represents the standard error of proportions?"
As noted in Category 1, the options are four text labels with no formula content. The original question almost certainly had a formula image or LaTeX expression that was lost. As it stands, the correct answer is tautologically the option that says "Standard error of proportions."
Disposition: Disable — the formula image is missing and the item is non-functional.
0d7db392 — "The National AIDS Control Organization provides prepacked colour-coded STI/RTI kits... Consider the following pairs: Pair No Colour codes STI/RTI conditions 1 Red Urethral discharge 2 Green Vaginitis 3 White Inguinal bubo"
The formatting of the table within the stem is broken — it appears as a run-on text string rather than a structured table. While the content is recoverable, the presentation is confusing and the correct answer depends on knowing that Red = urethral discharge (correct), Green = vaginal discharge (correct), and White = inguinal bubo (incorrect — White is for genital ulcer disease; inguinal bubo uses a different kit). The answer "only two pairs" is correct if Green = vaginitis is accepted, but the formatting failure makes this item unreliable.
Disposition: Fix — reformat as a proper table and verify the NACO colour-coding against current guidelines.
4. Low-Value But Correct (Too Simple, Low-Yield, Trivia-Heavy, Weak Exam Relevance)
Why this pattern is bad
This is the dominant quality problem in the reviewed sample. A question can be factually correct and still be worthless for PG preparation if it tests pure memorization of a single isolated fact at Bloom's Level 1, requires no reasoning, and has distractors that no informed candidate would choose. Such items inflate the question count without improving preparation quality, and they crowd out higher-value items in daily plans and mock tests.
The benchmark set demonstrates that even "easy" items (all rated difficulty 1) can be placed at Bloom's Level 3 by embedding the concept in a scenario. The candidate sample has not done this.
How it shows up
This pattern appears in approximately 50–55 of the 100 candidate items. The most egregious sub-types are: (a) single-fact recall with no scenario, (b) "who/what/when" trivia questions, (c) definition questions where the definition is the answer, and (d) questions where the correct answer is obvious from the stem without any subject knowledge.
Example question IDs and explanations
c824414c — "Which of the following diseases is spread by mosquitoes?" — Key: Dengue.
Trypanosomiasis (tsetse fly), Kala azar (sandfly), and Listeriosis (food-borne) are the distractors. Any candidate who has completed Class 10 biology can answer this. It has zero discriminatory value for PG-level assessment.
Disposition: Disable.
b5776883 — "World Tuberculosis Day is celebrated on which date?" — Key: 24th March.
Calendar trivia. No reasoning required. Not a concept that appears in INICET or NEET-PG in this bare form.
Disposition: Disable.
ad478951 — "Who is considered the Father of Epidemiology?" — Key: John Snow.
Historical trivia at its most basic. The concept of John Snow's contribution to epidemiology is worth teaching, but not in this format.
Disposition: Disable — if the concept is to be tested, reframe around what Snow's Broad Street pump investigation demonstrated methodologically.
633cb8e2 — "White death is" — Key: TB.
Colloquial nickname recall. No clinical or public health reasoning involved.
Disposition: Disable.
4f8b544a — "Which WHO STEPS program is used for non-communicable diseases?" — Key: Non-Communicable diseases.
The correct answer is literally the same phrase as the question. The distractors ("Communicable diseases," "Immunodeficient diseases," "Autoimmune diseases") are not plausible. This item has no discriminatory function.
Disposition: Disable.
692db03f — "What is the range of the correlation coefficient?" — Key: -1.0 to +1.0.
Pure mathematical definition recall. No application. Any candidate who has seen a statistics textbook knows this.
Disposition: Disable — the concept of correlation is worth testing, but through a scenario (e.g., interpreting a given r value in a study context).
7a5864d0 — "What is the percentage of the para-para isomer in DDT?" — Key: 70–80%.
This is the definition of low-yield trivia. The para-para isomer percentage of DDT has no clinical, programmatic, or epidemiological application in any Indian PG examination context.
Disposition: Disable.
af7346f0 — "What is the quarantine period for yellow fever?" — Key: 6 days.
Isolated numerical fact with no scenario. While quarantine periods are testable, this format adds nothing beyond rote memorization.
Disposition: Disable — if tested, embed in a scenario about a traveler arriving from an endemic country.
70ba6703 — "The Finance Commission derives its authority from which of the following?" — Key: Constitution of India.
This is a Polity / Civics question, not Community Medicine. Even if one argues it belongs under Public Health Administration, it tests general knowledge of Indian governance, not public health principles.
Disposition: Disable.
8356549e — "Goitre is prevalent at higher altitudes. This is an example of which type of association?" — Key: Indirect association.
The concept of indirect association (altitude → low iodine in soil → goitre) is legitimate, but the question is so stripped of context that it tests only the label, not the reasoning. A better version would present the data and ask the candidate to classify the association and explain why.
Disposition: Fix — add a brief epidemiological scenario with data before asking for classification.
1d47243c — "Which of the following is a common cause of rural waterborne diseases in India?" — Key: Contaminated water sources.
The correct answer is tautological — waterborne diseases are caused by contaminated water. The distractors (poor sanitation, lack of hygiene education, inadequate water treatment) are all contributing factors and are not clearly wrong. This item has no defensible single correct answer and is trivially obvious.
Disposition: Disable.
39b45c5e — "What percentage of people living in developing countries lack access to basic sanitation facilities?" — Key: 40–50%.
The options are not parallel (one says "60% have limited access to healthcare," another says "70% are at risk of waterborne diseases") — they are not all answering the same question. The item is also testing a statistic that is not sourced to a specific WHO report year and is likely outdated.
Disposition: Disable.
5. Repetitive or Duplicative Coverage
Why this pattern is bad
When multiple items in the same pool test the same narrow fact in the same format, they dilute the value of each other, inflate apparent topic coverage, and reduce the diversity of a mock test. In this sample, several concept clusters appear to be over-represented with near-identical items.
How it shows up
The reviewed set contains multiple items testing the same sub-concept of case-control studies, sensitivity/specificity definitions, and maternal death definitions — all at Bloom's Level 1–2 and all in bare-recall format.
Example question IDs and explanations
8ab33fcd and 5beedda2 — Both test case-control study identification. 8ab33fcd asks "A retrospective study is also known as which of the following?" (Key: Case-control study). 5beedda2 asks "Which study design is most effective for investigating rare adverse effects of a drug?" (Key: Case-control study). The second item is better (it has a rationale-based stem), but both are in the pool. The first adds nothing the second does not cover.
Disposition: Disable 8ab33fcd; keep 5beedda2.
2c7d5b3a and 93ec4d62 — Both test sensitivity. 2c7d5b3a asks for the definition of sensitivity directly. 93ec4d62 asks when a test becomes more sensitive (fewer false negatives). Both are Bloom's Level 2 at best. Neither approaches the benchmark standard of embedding sensitivity in a calculation scenario.
Disposition: Both are low-value; if one must be kept, keep 93ec4d62 as it requires slightly more reasoning. Disable 2c7d5b3a. Ideally, replace both with a scenario-based sensitivity/specificity calculation item.
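The scenario-based replacement suggested here would require the candidate to compute sensitivity and specificity from a 2×2 table rather than recall a definition. A minimal sketch of the underlying calculation (all counts hypothetical, invented for illustration):

```python
# Hypothetical screening scenario: a new rapid test evaluated
# against a gold standard in 200 subjects.
true_positive = 80    # diseased, test positive
false_negative = 20   # diseased, test negative
true_negative = 90    # healthy, test negative
false_positive = 10   # healthy, test positive

# Sensitivity = TP / (TP + FN): proportion of diseased correctly detected
sensitivity = true_positive / (true_positive + false_negative)

# Specificity = TN / (TN + FP): proportion of healthy correctly ruled out
specificity = true_negative / (true_negative + false_positive)

print(f"Sensitivity: {sensitivity:.0%}")  # Sensitivity: 80%
print(f"Specificity: {specificity:.0%}")  # Specificity: 90%
```

An item built on numbers like these forces the candidate to work the table, which is exactly what the benchmark set does and the candidate pool does not.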
6eae356d and f7715fe4 — Both test the maternal death definition. 6eae356d asks for the time period (42 days). f7715fe4 asks which statement about MMR is not true, with one option being "Denominator includes still births and abortions." Both are Bloom's Level 1. The second is slightly better because it tests a common misconception about the denominator, but neither is scenario-based.
Disposition: Disable 6eae356d; conditionally keep f7715fe4 but flag for upgrade to a scenario format.
dad4a1d2 — "What does relative risk represent?" — Key: Incidence among exposed divided by incidence among non-exposed.
This is a pure definition item. The benchmark set already contains bf118aa8 (a cohort study with actual numbers asking for RR calculation). The definition item adds nothing when a calculation item exists.
Disposition: Disable dad4a1d2.
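The calculation format that makes the benchmark cohort item superior can be sketched as follows (cohort numbers hypothetical, chosen for a clean result):

```python
# Hypothetical cohort study: exposure vs. disease incidence.
exposed_cases, exposed_total = 30, 1000        # cases among the exposed cohort
unexposed_cases, unexposed_total = 10, 1000    # cases among the non-exposed cohort

incidence_exposed = exposed_cases / exposed_total      # 0.03
incidence_unexposed = unexposed_cases / unexposed_total  # 0.01

# Relative risk = incidence among exposed / incidence among non-exposed
relative_risk = incidence_exposed / incidence_unexposed
print(relative_risk)  # 3.0
```

A definition item asks the candidate to recognize this ratio; a calculation item asks them to produce it, which is the difference between Bloom's Level 1 and Level 3.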
6. Worthwhile Concept, Weak Execution (Keep the Concept, Fix the Stem/Options/Vignette)
Why this pattern is bad
These items test concepts that genuinely appear in Indian PG examinations and are worth having in the bank, but the execution — stem phrasing, option construction, or absence of a scenario — prevents them from functioning at the required cognitive level. Discarding the concept entirely would leave a gap; the right action is a targeted rewrite.
How it shows up
The most common execution failures in this sub-group are: (a) a bare-recall stem that could be converted to a scenario with minimal effort, (b) distractors that are obviously wrong rather than plausibly wrong, and (c) "all of the above" or "none of the above" options that signal the correct answer.
Example question IDs and explanations
11c92a3a — "All of the following are true for a screening test except?" — Key: "Forms the basis for treatment."
The concept (screening vs. diagnostic testing) is high-yield and appears in PYQs. However, the stem is a bare "except" format with no scenario, and option B ("Test results are arbitrary") is not a standard teaching point about screening tests — it is an odd distractor. The item should be rewritten as a scenario: a physician uses a positive screening test result to start treatment without confirmatory testing — what is the error?
Disposition: Fix — rewrite as a clinical scenario testing the distinction between screening and diagnostic testing.
ebfbd821 — "Which of the following is/are objectives of the Roll Back Malaria program?" — Key: "All of the above."
The "All of the above" key is a well-known test-construction flaw — it signals the correct answer to test-wise candidates regardless of content knowledge. The Roll Back Malaria program objectives are legitimate Community Medicine content. The item needs to be rewritten with four specific, differentiated options where one is incorrect.
Disposition: Fix — eliminate "All of the above" option; replace with a specific incorrect objective (e.g., "Eradication of malaria within 5 years") as the distractor.
0ba75c79 — "In a developing country, the prevalence of diabetes mellitus is increasing at an annual rate of 1.8%. Using epidemiological principles similar to the Rule of 70, approximately how many years will it take for the diabetes prevalence to double?"
The concept (Rule of 70 / doubling time) is legitimate and has appeared in NEET-PG. However, the stem appends "and what are the primary healthcare planning implications of this growth rate?" — a question that is never answered in the options, which only give numerical ranges. The second part of the stem is vestigial and confusing. The correct answer (35–46 years, from 70/1.8 ≈ 38.9 years) is mathematically sound, but the option ranges are oddly wide and the stem needs cleaning.
Disposition: Fix — remove the planning implications clause from the stem; tighten the option ranges to make the calculation more discriminating.
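The arithmetic behind the item's key can be verified directly, and the Rule of 70 approximation agrees closely with the exact exponential doubling time:

```python
import math

annual_growth_pct = 1.8  # annual increase in prevalence, from the stem

# Rule of 70 approximation: doubling time (years) ≈ 70 / growth rate (%)
approx_years = 70 / annual_growth_pct

# Exact doubling time under compound growth: ln(2) / ln(1 + r)
exact_years = math.log(2) / math.log(1 + annual_growth_pct / 100)

print(f"Rule of 70: {approx_years:.1f} years")  # Rule of 70: 38.9 years
print(f"Exact:      {exact_years:.1f} years")   # Exact:      38.9 years
```

Both methods land near 39 years, which supports the reviewer's point that the key is mathematically sound and only the option ranges need tightening.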
da03188d — "What is the primary limitation of conducting a one-day census of inpatients in a mental hospital?"
The concept (point prevalence vs. period prevalence; length-biased sampling in cross-sectional surveys of inpatients) is genuinely high-yield for Biostatistics/Epidemiology. The correct answer ("Provides a snapshot of the current patient demographic but lacks longitudinal data") is defensible but imprecise — the more specific and examinable limitation is length-biased sampling (long-stay patients are over-represented). The item should be rewritten to make length-biased sampling the explicit concept being tested.
Disposition: Fix — rewrite the correct option to specifically name length-biased sampling or over-representation of chronic cases.
8b6b6ced — "Following are examples of human 'dead end' diseases except:" — Key: Bubonic plague.
The concept of dead-end hosts is high-yield and this item has appeared in NEET-PG 2012. The key is correct (bubonic plague can spread person-to-person via flea bite, unlike hydatid disease or JE where humans are dead-end hosts). However, the distractor "Leishmaniasis" is contestable — in visceral leishmaniasis, humans can serve as a reservoir for anthroponotic transmission in some settings. The item needs a brief explanatory note or the distractor should be replaced with a cleaner dead-end example.
Disposition: Fix — replace Leishmaniasis with a cleaner dead-end host example (e.g., Trichinella) or add a qualifier to the stem.
b8ee1291 — "Which of the following statements regarding the Red Cross sign is FALSE?" — Key: "It can be used by doctors and ambulances."
The concept (Geneva Convention protections for the Red Cross emblem) is legitimate for Public Health Administration. However, the correct answer is counterintuitive and the stem does not provide enough context for a candidate to reason through it — the Red Cross emblem is specifically protected under the Geneva Conventions and cannot be used by civilian doctors or ambulances outside of armed conflict contexts. The item needs a brief contextual note in the stem to make the reasoning transparent.
Disposition: Fix — add context about the Geneva Convention in the stem.
12b91e70 — "Which of the following refers to the tendency of an individual's relative position within a distribution (e.g., BP levels) to remain consistent over time..." — Key: Tracking of blood pressure.
The concept (tracking) is legitimate and has appeared in FMGE 2025. The stem is extremely long and essentially defines the answer, making this a definition-matching exercise rather than an application question. The item should be shortened and reframed as a scenario (e.g., a child in the 90th percentile for BP at age 8 is found to be in the 88th percentile at age 18 — what phenomenon does this illustrate?).
Disposition: Fix — shorten stem and convert to scenario format.
30088491 — "A pharmaceutical company develops a new antihypertensive drug... Neither the patients nor the treating physicians are aware of which patients are in which group. What type of study is this?" — Key: A randomized controlled clinical trial.
The concept is correct and the scenario format is appropriate. However, the correct answer should arguably be "double-blind randomized controlled trial" — the double-blinding is the most specific and notable feature of the design described. "A randomized controlled clinical trial" is technically correct but less precise than the scenario warrants. The distractors are also weak: "A prospective study" is not wrong (an RCT is prospective), making it a partially correct distractor.
Disposition: Fix — change the correct option to "Double-blind randomized controlled trial" and replace "A prospective study" with a more clearly incorrect distractor.
Prioritization
The following priority order is recommended for the content operations team, based on urgency and impact:
Priority 1 — Immediate action (wrong key / factually unsafe / broken delivery)
These items are actively causing harm if served to candidates. They should be pulled from all daily plans and mock tests before any other work begins.
Items: d573c154, 6f8c4e60, 3f64e7a8, 93e03b09, 4fb7d0ab, cbd8585f, e654d803, db73f5ae
Priority 2 — Disable (low-value trivia, subject contamination, tautological items)
These items are not causing active harm but are consuming pool slots and degrading test quality. They should be disabled in the next content cycle.
Items: c824414c, b5776883, ad478951, 633cb8e2, 4f8b544a, 692db03f, 7a5864d0, af7346f0, 70ba6703, 1d47243c, 39b45c5e, de7767da, 8ab33fcd, dad4a1d2, 6eae356d, 4f907973, f7d11283, 9f762403 (verify referral complex attribution), a1a1fa14, 92b5bff6
Priority 3 — Fix (worthwhile concept, weak execution)
These items have salvageable concepts and should be rewritten before re-enabling. They should not be served in their current form.
Items: 11c92a3a, ebfbd821, 0ba75c79, da03188d, 8b6b6ced, b8ee1291, 12b91e70, 30088491, f75d4b5f, 0d7db392, 61f3cedd, 9cd574d5, 060de3d5, 8356549e
Priority 4 — Keep with monitoring
Items that are factually correct, in-subject, and at least minimally functional, though most sit at low Bloom's levels. These can remain in the pool but should be deprioritized in high-stakes test templates in favor of higher-Bloom's items.
Items: 5beedda2, 42dc1bb9, 5ae88d61, 8ef5035a, 729930b8, 2c7d5b3a (marginal), d121ba81, 5bc2c1ea, 9b1913d9, 0604df6c, 5a7a50e6, b639647c, f83949af
Example Keep / Fix / Disable Calls
| Question ID | Topic | Disposition | Reason |
|---|---|---|---|
| d573c154 | Environmental Health | DISABLE | Wrong key: sharps go in white puncture-proof container, not yellow bag per BMW Rules 2016 |
| 6f8c4e60 | Immunization | DISABLE | Wrong key: Hepatitis B is in the National Immunization Schedule |
| 3f64e7a8 | Non-Communicable Diseases | DISABLE | Wrong key: no recognized WHO diabetes threshold matches "120–180 mg/100ml" |
| 93e03b09 | Biostatistics | DISABLE | Broken delivery: formula image missing; options are non-functional text labels |
| 4fb7d0ab | Environmental Health | DISABLE | Wrong key: E. coli, not Enterococcus, is the primary fecal contamination indicator |
| db73f5ae | Environmental Health | DISABLE | Wrong subject: clinical psychiatry vignette (panic attack) filed under Environmental Health |
| de7767da | Non-Communicable Diseases | DISABLE | Wrong subject: basic zoology question (arthropod leg count) filed under NCDs |
| c824414c | Infectious Diseases | DISABLE | Low-value trivia: mosquito-borne disease identification at Class 10 level |
| b5776883 | Infectious Diseases | DISABLE | Low-value trivia: World TB Day date, no reasoning required |
| 4f8b544a | Non-Communicable Diseases | DISABLE | Tautological: correct answer is the same phrase as the question |
| 1d47243c | Rural Health | DISABLE | Tautological: waterborne diseases caused by contaminated water; no discriminatory value |
| cbd8585f | Immunization | FIX | Broken delivery: VVM concept is high-yield; attach correct VVM image showing discard point |
| ebfbd821 | Infectious Diseases | FIX | "All of the above" key; Roll Back Malaria objectives are legitimate content; rewrite options |
| 11c92a3a | Epidemiology | FIX | Screening vs. diagnostic testing is high-yield; convert bare "except" to clinical scenario |
| 30088491 | Biostatistics | FIX | RCT scenario is good; correct option should specify double-blind; fix weak distractor |
| 8b6b6ced | Epidemiology | FIX | Dead-end host concept is high-yield PYQ content; replace contestable Leishmaniasis distractor |
| 0ba75c79 | Epidemiology | FIX | Rule of 70 / doubling time is legitimate; remove vestigial planning clause from stem |
| 5beedda2 | Epidemiology | KEEP | Case-control for rare adverse effects: correct, in-subject, rationale-based stem |
| b639647c | Health Education | KEEP | Health education vs. propaganda distinction: correct, UPSC-CMS PYQ, Bloom's 4 |
| 8b6b6ced | Epidemiology | KEEP (after fix) | Dead-end host: high-yield NEET-PG PYQ concept once distractor is corrected |
| d121ba81 | Epidemiology | KEEP | Case-control study limitations: correct, in-subject, tests a genuine misconception (attributable risk) |
| 5a7a50e6 | Immunization | KEEP | DPT contraindications: correct, clinically relevant, tests a specific and examinable fact |
| 0604df6c | Immunization | KEEP | Tetanus communicability: correct, tests a specific and commonly confused fact |
This report covers the reviewed sample of 100 candidate questions only. Findings should not be extrapolated as universal characterizations of the full 10,989-item pool without further sampling. The wrong-key calls on d573c154, 6f8c4e60, 3f64e7a8, and 4fb7d0ab are made with high confidence based on current authoritative guidelines and should be treated as urgent.