Verified packet scope

This report is grounded in a randomized packet drawn from a bank of 5,504 questions: 60 validated generic candidates, 40 validated risky candidates, and 18 gold-reference items (6 benchmark, 12 PYQ), for a total of 118 sampled items.


Forensic Medicine Question Quality Review


Executive Summary

This review covers a candidate sample of 100 validated non-gold questions drawn from a pool of 5,504 Forensic Medicine items. The sample was evaluated against six benchmark questions and twelve recent PYQs as the quality bar.

The single most serious structural problem in this sample is a severe Bloom's level skew: 64% of candidate questions sit at Bloom's Level 1 (pure recall), compared to a benchmark/PYQ set that, while also recall-heavy, consistently embeds recall within clinically meaningful or legally precise contexts. The candidate pool adds almost no reasoning demand on top of that recall. Only 3 candidate questions reach Bloom's Level 3 or above, versus 9 in the combined gold set.

Beyond the Bloom's distribution, the reviewed set shows five actionable problem types: factually unsafe or wrong-key items; broken delivery due to missing images; low-value trivia with no realistic exam relevance; direct duplication of question content across different IDs; and worthwhile concepts executed so poorly that the stem or options must be substantially rewritten before the item is usable. A small number of items also show subject or topic misplacement.

Headline numbers from this sample:

Issue Category                                    | Approx. Count in Sample | Recommended Action
Wrong Key or Factually Unsafe                     | 5–6                     | Disable or urgent fix
Wrong Subject / Topic Misplacement                | 3–4                     | Reroute or disable
Broken Delivery (missing image)                   | 6–8                     | Disable until image confirmed
Low-Value But Correct (trivia, no exam relevance) | 25–30                   | Disable
Repetitive / Duplicative Coverage                 | 4–5 pairs               | Disable weaker duplicate
Worthwhile Concept, Weak Execution                | 15–20                   | Fix stem/options

The overall usable rate of the candidate sample, without remediation, is estimated at roughly 40–45%. The remaining items require either immediate disabling or substantive editorial work before they can be safely deployed.


What Good Looks Like

The benchmark and PYQ items establish a clear quality bar. Several features distinguish them from the weak items in the candidate sample:

1. Recall questions are anchored to a specific legal or clinical decision point. The benchmark question on BNS grievous hurt (0ece3701) does not ask "what is grievous hurt?" — it asks which of four plausible-sounding options falls outside the statutory definition. The distractor "dislocation of elbow" is a genuine trap because students conflate joint dislocation with the listed grievous hurt categories. Every option requires the candidate to apply the definition, not just recognise a word.

2. Clinical vignettes are used even at Bloom's Level 3–4. The PYQ on potassium permanganate poisoning (249c9c73) gives an occupational history, a symptom cluster, and asks for the suspected poison. The dye factory context is a real discriminator — it rules out arsenic and lead as primary suspects and points toward an oxidising agent. The PYQ on seawater drowning (5aee0274) requires the candidate to reason through the osmotic physiology of saltwater aspiration to select the correct combination of findings. These are not hard questions, but they demand more than name-matching.

3. Image-dependent questions are genuinely image-dependent. The PYQ on hesitation cuts (e6b91658) and the ligature mark abrasion (84e287e9) use images that carry the diagnostic load. The text stem alone would be ambiguous; the image resolves it. This is the correct use of image-based items.

4. Distractors are plausible and educationally meaningful. In the MTP Act question (92f67d8f), all four gestational age ranges are clinically relevant thresholds under the amended Act. A candidate who has not studied the 2021 amendment carefully will be genuinely uncertain between 20–24 weeks and 12–16 weeks. Contrast this with candidate items where one distractor is "None of the above" or where three options are obviously wrong.

5. Legislative currency is maintained. The benchmark set references BNS (Bharatiya Nyaya Sanhita) and the MTP Amendment Act 2021 — the current operative law. Several candidate items still reference IPC section numbers without acknowledging the BNS transition, which creates a factual currency problem.


Main Issue Categories


1. Wrong Key or Factually Unsafe

Why this pattern is bad

A wrong key is the most dangerous defect in a question bank. It actively teaches incorrect information, penalises well-prepared candidates, and, in a subject like Forensic Medicine where legal thresholds and pharmacological facts have direct clinical consequences, it can propagate errors into practice. Items in this category must be treated as urgent.

How it shows up

In this sample, wrong-key problems appear in three sub-patterns: (a) a factually incorrect answer is marked correct; (b) the correct answer is present but a different option is marked correct due to a classification error; (c) the stem contains a factual premise that is itself wrong, making all options potentially misleading.

Example question IDs and explanations

2f52a516 — "Which of the following causes respiratory depression?" The marked correct answer is Strychnine. This is factually wrong. Strychnine is a spinal cord stimulant (glycine antagonist) that causes convulsions and death by respiratory muscle exhaustion secondary to tetanic spasm — not by central respiratory depression. Opium and barbiturates, both listed as distractors, are the canonical causes of central respiratory depression. This item will actively mislead candidates preparing for toxicology questions. The key must be changed to Opium or Barbiturate, but the stem as written ("which causes respiratory depression") then becomes a low-value recall item. Recommended disposition: Disable — the concept is better tested by a vignette-based item already likely present in the gold set.

b8d8e402 — "A postmortem cherry red colour is indicative of which type of poisoning?" The marked correct answer is Cyanide poisoning. This is a known ambiguity that has caused repeated exam controversy. Cherry-red postmortem lividity is classically associated with carbon monoxide poisoning in standard Indian forensic textbooks (Reddy, Pillay). Cyanide can produce a similar appearance but is not the primary or most reliable association. The absence of CO as a distractor option makes this item additionally problematic — the candidate cannot demonstrate nuanced knowledge. Recommended disposition: Fix — rewrite to include CO as an option and either change the key or add a qualifying note in the explanation; alternatively, disable if a better CO/cyanide differentiation item exists.

7d572ffc / 9f21a162 — "A bullet is typically picked up using which instrument?" Both items mark "Hands" as correct. Standard forensic procedure requires that bullets be picked up with rubber-gloved hands or non-metallic forceps to avoid contamination and preserve ballistic markings. Bare hands would contaminate the evidence. The answer "Hands" is at best incomplete and at worst operationally wrong in a medicolegal context. Recommended disposition: Disable both — the concept is low-yield and the key is unsafe.

9037d617 — "Maximum sentencing powers of a First Class Magistrate" The marked answer is "Fine up to 10,000 and 3 years imprisonment." Under the Code of Criminal Procedure (CrPC), a First Class Magistrate could impose imprisonment up to 3 years and a fine up to ₹10,000, so the key is broadly correct under the old code. However, with the Bharatiya Nagarik Suraksha Sanhita (BNSS) now operative, these figures have changed. The item does not specify which code it is testing, creating a legal currency problem. Recommended disposition: Fix — update to BNSS provisions and specify the applicable code in the stem.

d17d911c — "Minimum period for which a doctor should preserve patient records per MCI guidelines" The marked answer is 3 years. The MCI (now NMC) guidelines specify 3 years for outpatient records and 5 years for inpatient records in most interpretations, but the NMC Code of Medical Ethics 2023 and hospital accreditation standards (NABH) specify different retention periods. The question is ambiguous because it does not specify the type of record. Additionally, the MCI no longer exists as a regulatory body — it has been replaced by the NMC. Recommended disposition: Fix — specify record type and update the regulatory body reference to NMC.

Recommended disposition summary for this category: Disable 2f52a516, 7d572ffc, 9f21a162. Fix b8d8e402, 9037d617, d17d911c before any deployment.


2. Wrong Subject or Wrong Topic Misplacement

Why this pattern is bad

Topic misplacement creates noise in topic-level analytics, breaks quiz and plan logic that relies on topic tagging, and confuses candidates who are studying a specific area. It is a separate problem from weak writing — a well-written question in the wrong topic bucket is still misplaced.

How it shows up

In this sample, misplacement appears in two forms: (a) a question is tagged to a Forensic Medicine topic but the core concept belongs to another subject; (b) a question is tagged to the wrong Forensic Medicine sub-topic.

Example question IDs and explanations

dd207122 — "Fear of darkness is called:" (tagged: Sexual Offences and Abortion) This question asks for the term "nyctophobia." It has no connection to sexual offences or abortion. The concept belongs to Forensic Psychiatry (phobias and paraphilias) at best, or to Psychiatry as a subject. Its placement in Sexual Offences and Abortion is unexplained and will corrupt topic-level performance data for that sub-topic. Recommended disposition: Reroute to Forensic Psychiatry; evaluate whether the item has sufficient exam relevance to keep (see also Category 4).

69c52303 — "What is synonymous with a pugilistic attitude?" (tagged: Forensic Psychiatry) Pugilistic attitude is a postmortem finding in burn victims — it is a topic in Forensic Pathology (postmortem changes / burns), not Forensic Psychiatry. The term "pugilistic" may have triggered a psychiatric tag due to its association with aggression, but the forensic meaning is entirely anatomical. Recommended disposition: Reroute to Forensic Pathology / Injuries and Their Significance.

782941bc — "All of the following are tests done on blood, except: Acid phosphatase test" (tagged: Asphyxial Deaths) This question tests knowledge of forensic serology — specifically, which tests are used to identify blood stains. Acid phosphatase is a test for seminal fluid, not blood. The question belongs to Identification or DNA Profiling and Forensic Biology, not Asphyxial Deaths. The topic tag appears to have been assigned by proximity to another question in a batch rather than by content. Recommended disposition: Reroute to Identification; also evaluate the key (see Category 6 — the question is conceptually sound but the topic tag is wrong).

Recommended disposition summary for this category: Reroute all three. After rerouting, re-evaluate each for quality under the appropriate topic.


3. Broken Delivery (Missing Image, Malformed Options, Incomplete Stem)

Why this pattern is bad

A question that depends on an image to be answerable, but where the image is absent or unconfirmed in the candidate sample data, is functionally unanswerable. Deploying such items creates a negative candidate experience, generates spurious wrong-answer data, and undermines trust in the platform. This is a delivery problem, not a conceptual one — the underlying question may be excellent, but it cannot be used until the image is confirmed present and rendering correctly.

How it shows up

Several items in the candidate sample have stems that are grammatically or logically incomplete without a visual referent ("What is incorrect about the image shown below?", "What is the name given for the torture method shown below?", "What type of inflicted weapon is suggested by the wound characteristics?"). In the data provided, no image attachment is visible. These items cannot be evaluated for correctness without the image, and they cannot be safely deployed without confirming the image renders.

A secondary broken-delivery pattern is the use of "None of the above" as a distractor, which is a weak option construction that provides no diagnostic information about candidate knowledge.

Example question IDs and explanations

9de839cd — "What is incorrect about the image shown below?" (Asphyxial Deaths) The entire question depends on an image of foam/froth at the mouth in a drowning case. Without the image, the stem is unanswerable. The options reference specific properties of the foam (cone shape, collapse on touch), which are testable concepts, but the "incorrect" framing combined with image dependency makes this high-risk. If the image is confirmed present and correct, this is actually a reasonable Bloom's Level 4 item. Recommended disposition: Hold — confirm image renders correctly before deployment; if image is absent, disable.

0c1f1053 — "What is the name given for the torture method shown below?" (Injuries and Their Significance) Image-dependent. The options (Telefono, Parrot's perch, Dunking, Felanga) are all named torture methods. Without the image, the question is a random guess. Recommended disposition: Hold — confirm image; if absent, disable.

b4b75da0 — "What type of inflicted weapon is suggested by the wound characteristics?" (Injuries and Their Significance) The stem references "wound characteristics" but no image or description of the wound is provided in the text. The options (screwdriver, single-edged knife, double-edged knife, ice pick) require a visual or descriptive wound pattern to discriminate. As written without an image, this is unanswerable. Recommended disposition: Hold — confirm image; if absent, disable or rewrite as a text-based vignette describing the wound.

425e1059 — "The appearance of lines in nails as shown below is seen in?" (Forensic Toxicology) The stem explicitly says "as shown below" and references a NEET 2016–17 pattern. The image of Mees' lines (transverse white bands in arsenic poisoning) is the entire diagnostic content. Without the image, this is a pure recall question about Mees' lines — which is already covered by better-written items in the gold set. Recommended disposition: Hold — confirm image; if absent, disable (the concept is covered by cc163ecf and the gold-standard arsenic PYQ).

e6b91658 (PYQ gold) — note for contrast: This PYQ item also depends on an image (hesitation cuts), but it is in the gold set and presumably has a confirmed image. The candidate items above do not have that confirmation.

Additional broken-delivery note — "None of the above" options: Items e7fbce67 and 343f70a5 use "None of the above" as a distractor. This is a weak option construction that provides no educational signal and is generally avoided in high-quality MCQ design. These items should be fixed to replace "None of the above" with a substantive distractor.

Recommended disposition summary for this category: Hold all image-dependent items pending image confirmation. Disable if images are absent. Fix "None of the above" options in e7fbce67 and 343f70a5.
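The hold list for this category can be generated mechanically. The sketch below assumes a minimal, hypothetical item schema (a dict with "id", "stem", and an optional "image" attachment field; field names are illustrative, not the platform's actual schema) and flags items whose stem implies a visual referent but which carry no confirmed image attachment.

```python
import re

# Phrases that signal image dependency in a stem (illustrative list).
IMAGE_CUES = re.compile(r"shown below|image|photograph|figure", re.IGNORECASE)

def flag_image_dependent(items):
    """Return IDs of items whose stem implies a visual but whose image
    attachment is missing -- candidates for Hold/Disable."""
    flagged = []
    for item in items:
        if IMAGE_CUES.search(item["stem"]) and not item.get("image"):
            flagged.append(item["id"])
    return flagged

# Two items from the sample, with a hypothetical 'image' field set to None
# because no attachment is visible in the candidate data.
sample = [
    {"id": "9de839cd",
     "stem": "What is incorrect about the image shown below?",
     "image": None},
    {"id": "27578bde",
     "stem": "Burton's line is seen with poisoning of which metal?",
     "image": None},
]
print(flag_image_dependent(sample))  # -> ['9de839cd']
```

A cue-phrase scan like this will miss items (such as b4b75da0) whose stems imply a visual without naming one, so it complements rather than replaces manual review.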


4. Low-Value But Correct (Too Simple, Low-Yield, Trivia-Heavy, Weak Exam Relevance)

Why this pattern is bad

A question can be factually correct and still be a poor exam item. In this sample, the dominant quality failure is not wrong keys — it is the large volume of items that test isolated, decontextualised facts with no clinical or legal reasoning demand, no plausible distractors, and no realistic chance of appearing in a competitive PG exam. These items inflate the question count without adding discriminative value. They also crowd out higher-quality items in daily plans and quizzes, reducing the overall learning signal for candidates.

The Bloom's distribution makes this concrete: 64 of 100 candidate questions are Bloom's Level 1. Of those, a substantial proportion test facts that are either (a) so well-known that no competitive candidate would get them wrong, or (b) so obscure and low-frequency that they have never appeared in any major exam and are unlikely to do so.

How it shows up

Sub-patterns observed in this sample:

  • Definitional recall with no discriminating distractors: The correct answer is obvious from the stem; the wrong options are implausible.
  • Eponym-to-finding matching with no clinical context: "Burton's line is seen in which poisoning?" — correct answer is Lead, and the other three options (Mercury, Arsenic, Zinc) are all metals but none is a realistic confusion for a prepared candidate.
  • Statutory section number recall: "Which IPC section deals with grievous injury?" — Section 320. This is a number-memorisation task with no reasoning component and is increasingly obsolete given the BNS transition.
  • Obscure trivia with no exam precedent: "What substance is used in sin needles for animal poisoning?" — Rati seeds. This has no realistic exam relevance and no clinical application.
  • Phobia naming: "Fear of darkness is called Nyctophobia." This is a vocabulary question, not a forensic medicine question.

Example question IDs and explanations

a00e289d — "Rigor mortis occurs due to: Muscle of the body began to stiffen" This is the most basic possible question about rigor mortis. The correct answer is self-evident from the term "rigor" (Latin: stiffness). The distractors include "mummification of body tissues," which is a completely different postmortem change. No competitive PG candidate would miss this. Recommended disposition: Disable — the concept is covered far better by the rigor mortis questions in the gold set and by 99107deb (which at least tests a specific, non-obvious fact about rigor mortis in fetuses).

27578bde — "Burton's line is seen with poisoning of which metal?" Factually correct, but this is a pure eponym-recall item. The distractors (Mercury, Arsenic, Zinc) are not realistic traps for a prepared candidate. The concept is exam-relevant, but the execution is too simple. Recommended disposition: Disable — replace with a vignette-based item (e.g., a patient with occupational lead exposure presenting with gingival changes, asking for the finding name or the mechanism).

f459a54a — "What substance is commonly used in sin needles for animal poisoning?" Rati seeds (Abrus precatorius). This is highly obscure trivia. It has not appeared in any major PG exam in the reviewed set and has no clinical application for a physician. Recommended disposition: Disable.

dd207122 — "Fear of darkness is called:" Already flagged under topic misplacement. Even if rerouted to Forensic Psychiatry, this is a vocabulary question with no forensic or clinical reasoning demand. Recommended disposition: Disable.

92ee5378 — "Which method is used for torture in China?" "Thighs tied with bamboo." This is obscure cultural trivia about historical torture methods. It has no clinical, legal, or forensic reasoning application. It has not appeared in any major exam in the reviewed set. Recommended disposition: Disable.

8cb49dc8 — "Which IPC section deals with grievous injury? Section 320" Statutory section number recall. With the BNS now operative, IPC section numbers are being phased out of current exam syllabi. Even under the old IPC, this was a low-reasoning item. Recommended disposition: Disable — the benchmark question (0ece3701) tests the same concept (grievous hurt under BNS) at a far higher quality level.

1342ac38 — "Multiplication factor to estimate height from foot length: 7" Pure numerical recall with no clinical context. The distractors (5, 6, 8) are arbitrary numbers. Recommended disposition: Disable.

7cee520f — "In frostbite, when does skin become hard and black? 2 weeks" Specific numerical threshold for a rare environmental condition. Low exam frequency, no clinical reasoning demand. Recommended disposition: Disable.

f30db4d0 — "IPC Section 141 is related to: Unlawful assembly" Section number recall for a general IPC provision that is not specific to forensic medicine practice. Recommended disposition: Disable.

6f1e41e5 — "Subpoena is a term used in which context? Legal document" Vocabulary question. Any candidate who has read a single page of medical jurisprudence knows this. Recommended disposition: Disable.

Additional items in this category (brief notes, same disposition — Disable):

  • 86fefdc4 (Soot in respiratory tract = Burns): too simple, no distractors are plausible.
  • 235c691d (Rule of Nines = Burns): basic recall, no reasoning.
  • 1fed58b4 (McEwan's sign above 300 mg%): obscure numerical threshold, low exam frequency.
  • 71840da8 (Double base smokeless powder = nitrocellulose + nitroglycerine): ballistics trivia, low clinical relevance.
  • 8cb1e56e (Y chromosome in dental pulp up to 12 months): specific numerical threshold, low exam frequency.
  • f65e7b19 (Jet black tongue = Cocaine): factually questionable — cocaine causes local vasoconstriction and necrosis but "jet black tongue" is not a standard exam-level finding for cocaine; this may also belong in Category 1.

Recommended disposition summary for this category: Disable the approximately 25–30 items in this pattern. Do not attempt to rewrite definitional recall items into vignettes unless the underlying concept is genuinely high-yield and not already covered by a better item.


5. Repetitive or Duplicative Coverage

Why this pattern is bad

Duplicate items waste question bank capacity, create inconsistent candidate experiences when both versions appear in the same quiz or plan, and — most dangerously — when the two versions have different keys for the same question, they actively confuse candidates. In this sample, duplication appears both as near-identical stems across different IDs and as conceptual overlap where two items test the same single fact with only superficial wording differences.

How it shows up

The most striking duplication in this sample is a direct stem-and-options clone across two different topic tags. There is also conceptual duplication where the same narrow fact is tested by multiple items that add no incremental learning value.

Example question IDs and explanations

7d572ffc and 9f21a162 — "A bullet is typically picked up using which instrument?" These two items have identical stems, identical options, and identical keys ("Hands"). They are filed under different topics (Injuries and Their Significance vs. Forensic Science in Court respectively). This is a direct clone. As noted in Category 1, the key is also factually unsafe. Recommended disposition: Disable both — the key is wrong, and even if fixed, one item is sufficient.

08ff8d57 and the PYQ pair (050daa97 / e290187b) — Burking The candidate item 08ff8d57 asks "Identify the type of homicide caused by smothering and traumatic asphyxia" with the correct answer Burking. The PYQ set contains two versions of the Burking question (050daa97 and e290187b) — notably, these two PYQs themselves have conflicting keys: 050daa97 marks Smothering as correct for the scenario of sitting on chest and covering nose/mouth, while e290187b marks Burking as correct for the same scenario. This is a known exam controversy. The candidate item 08ff8d57 adds a third version of the same concept. Recommended disposition for 08ff8d57: Disable — the concept is already covered (and contested) in the PYQ set; adding a third version without resolving the key conflict makes the situation worse.

c431b5ef — "Kennedy phenomenon" and the broader firearms wound cluster Multiple items in the sample test narrow facts about firearm wounds (calibre definition, fragmenting bullet characteristics, Kennedy phenomenon). These are individually low-yield and collectively represent over-coverage of a single sub-topic. The Kennedy phenomenon item (c431b5ef) is particularly obscure — it is not a standard exam topic in major Indian PG exams. Recommended disposition: Disable c431b5ef; retain 13d32ae0 (fragmenting bullet) only if the key is verified.

Rigor mortis coverage: Items a00e289d (rigor mortis = stiffening), 99107deb (rigor mortis in fetuses), and the broader postmortem changes cluster represent multiple items on the same narrow topic. 99107deb is the better item (tests a specific, non-obvious fact). a00e289d should be disabled as noted in Category 4.

Recommended disposition summary for this category: Disable 7d572ffc, 9f21a162, and 08ff8d57. Audit the full Burking/smothering cluster to resolve the key conflict in the PYQ pair before any of these items are deployed together.
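Direct clones like the 7d572ffc / 9f21a162 pair are detectable by hashing normalized stems and options across the whole bank. A minimal sketch, assuming the same hypothetical item schema as above (the distractor lists shown are illustrative; only the key "Hands" is attested in the review):

```python
import re
from collections import defaultdict

def normalize(text):
    """Lowercase, strip punctuation, collapse whitespace, so trivially
    reworded clones hash to the same key."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def find_clones(items):
    """Group item IDs by (normalized stem, sorted normalized options);
    return only groups with more than one ID (potential duplicates)."""
    groups = defaultdict(list)
    for item in items:
        key = (normalize(item["stem"]),
               tuple(sorted(normalize(o) for o in item["options"])))
        groups[key].append(item["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

sample = [
    {"id": "7d572ffc",
     "stem": "A bullet is typically picked up using which instrument?",
     "options": ["Hands", "Metal forceps", "Magnet", "Scalpel"]},
    {"id": "9f21a162",
     "stem": "A bullet is typically picked up using which instrument?",
     "options": ["Hands", "Metal forceps", "Magnet", "Scalpel"]},
]
print(find_clones(sample))  # -> [['7d572ffc', '9f21a162']]
```

Exact-match hashing catches clones across different topic tags but not conceptual duplicates like the Burking cluster, which still require editorial judgment.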


6. Worthwhile Concept, Weak Execution (Keep the Concept, Fix the Stem/Options/Vignette)

Why this pattern is bad

These items test concepts that are genuinely exam-relevant and clinically or legally important, but the stem is written in a way that reduces discriminative power, introduces ambiguity, or fails to challenge a prepared candidate. The concept should not be abandoned — it should be rewritten. This is the most actionable category for the content operations team because the underlying knowledge is sound; only the delivery needs work.

How it shows up

Sub-patterns in this sample:

  • Correct concept, but "EXCEPT" framing with implausible distractors: The correct answer is obvious because the wrong options are clearly unrelated to the topic.
  • Correct concept, but stem gives away the answer: The phrasing of the stem contains the answer or makes one option obviously correct.
  • Correct concept, but options are not parallel or are poorly constructed: One option is much longer or more specific than the others, signalling the correct answer.
  • Correct concept, but legislative reference is outdated (IPC vs. BNS).

Example question IDs and explanations

66ea17be — "Which sign is most indicative of antemortem burns? Presence of soot in the airways" The concept (differentiating antemortem from postmortem burns) is high-yield and exam-relevant. However, the correct answer — soot in the airways — is also the answer to the question in 86fefdc4 ("Soot particles in the respiratory tract indicate death due to burns"). The distractor "cherry-red skin appearance" is actually a feature of CO poisoning, not burns per se, which introduces a secondary ambiguity. The stem should be rewritten as a clinical vignette: a body recovered from a house fire, with specific autopsy findings listed, asking the candidate to identify which finding confirms the victim was alive when the fire started. Recommended disposition: Fix — rewrite as vignette; replace cherry-red skin with a more appropriate distractor (e.g., heat fractures of bone, epidural haemorrhage from heat).

fc734073 — "Which is NOT true about subendocardial haemorrhage? Involves the right ventricular wall" The concept (subendocardial/subepicardial haemorrhage as a postmortem finding) is exam-relevant. The correct answer — that it involves the left ventricular wall, not the right — is a genuine discriminator. However, the "NOT true" framing with four options of unequal plausibility weakens the item. The option "Has a continuous sheet-like pattern" is a genuine trap (it is actually characteristically flame-shaped/discontinuous), which is good distractor design. The item is close to usable but needs the stem rewritten to positive framing and the options balanced. Recommended disposition: Fix — convert to positive framing ("Which of the following is a feature of subendocardial haemorrhage?") and verify all four options against a standard reference.

eef9ffca — "All of the following are deliriant poisons, EXCEPT: Aconite" The concept (classification of poisons by mechanism) is exam-relevant and tested in the gold set (fbf909e6). Aconite is correctly identified as a cardiac/neurotoxic poison, not a deliriant. However, the "EXCEPT" format with a list of three correct items and one exception is a weak design. A better version would present a clinical scenario of a patient with delirium, hallucinations, and dry mouth, asking which poison is most likely — forcing the candidate to apply the classification rather than recall it. Recommended disposition: Fix — rewrite as application-level vignette.

736c2023 — "A pregnant female at 10 weeks requests MTP. Legally, up to what gestational period is MTP acceptable? 140 days" The concept (MTP Act gestational limits) is high-yield and directly tested in the gold set (92f67d8f). The answer "140 days" (20 weeks) is correct for the upper limit with one RMP opinion under the 2021 amendment. However, the question is poorly constructed: it presents a patient at 10 weeks and asks for the legal upper limit, which is a different question from what the clinical scenario implies. The scenario suggests the candidate should advise the patient — but the question asks for an abstract legal threshold. The stem and scenario are misaligned. Additionally, the option "63 days" (9 weeks) is the threshold for medical abortion without RMP opinion under some interpretations, which is a genuine trap that is not being used effectively here. Recommended disposition: Fix — align the clinical scenario with the question being asked; use the gestational thresholds as discriminating distractors more carefully.

dfdb76ae — "In carbamate poisoning, all the following should be administered except: Oximes" The concept (oximes are contraindicated in carbamate poisoning because the carbamate-cholinesterase bond is reversible and oximes may worsen toxicity) is genuinely high-yield and exam-relevant. The item is factually correct. The weakness is the "all except" format and the absence of a clinical context. A vignette of a farmer presenting with cholinergic features after pesticide exposure, asking which treatment should be withheld, would be a Bloom's Level 4 item testing the same concept. Recommended disposition: Fix — rewrite as clinical vignette.

2b5f7eed — "Privileged communication is made between: Doctor and concerned authority" The concept of privileged communication is exam-relevant. However, the correct answer "Doctor and concerned authority" is vague and potentially misleading — privileged communication in the medicolegal sense refers to communication that is protected from disclosure in court, typically between doctor and patient. The question appears to be asking about the reporting of privileged information (i.e., when a doctor is legally required to disclose to authorities), which is a different concept. The stem and key are conceptually confused. Recommended disposition: Fix — clarify whether the question is about the definition of privileged communication or the exceptions to it; rewrite accordingly.

eaeb93a5 — "Which condition can lead to a false-negative hydrostatic test in a live-born fetus? Atelectasis" The concept (limitations of the hydrostatic/docimasia test) is high-yield for infanticide questions. Atelectasis causing a false-negative (lungs sink even though baby was born alive) is correct. However, the distractor "putrefaction" is actually a cause of false-positive (lungs float even though baby was stillborn), not false-negative — this is a genuine trap that is being wasted as a distractor without explanation. The item would be stronger if it asked candidates to distinguish false-positive from false-negative causes. Recommended disposition: Fix — rewrite to test the distinction between false-positive and false-negative causes of the hydrostatic test.

Additional items in this category (brief notes):

  • 844f606d (Knot behind left ear in hanging = judicial hanging): The marked answer "self-inflicted hanging" is questionable — a knot behind the left ear is the standard position in judicial hanging, not self-inflicted. This may also belong in Category 1. Recommended disposition: Urgent review — verify key against standard references; likely a wrong key.
  • cc163ecf (Arsenic poisoning vignette with Mees' lines): This is actually a good item — clinical vignette, Bloom's Level 2, correct key. The only weakness is that garlic odour is shared with phosphorus poisoning and the stem should specify that the odour is from breath/sweat rather than gastric contents to be unambiguous. Recommended disposition: Minor fix — add "on breath and sweat" to the garlic odour description.
  • 99e0017c (Diatoms in bone marrow = drowning): Correct and exam-relevant. The topic tag (Injuries and Their Significance) is slightly off — this belongs in Asphyxial Deaths or Forensic Pathology. Minor reroute recommended.

Recommended disposition summary for this category: Fix 66ea17be, fc734073, eef9ffca, 736c2023, dfdb76ae, 2b5f7eed, eaeb93a5, cc163ecf. Urgently review 844f606d for possible wrong key.


Prioritization

The table below ranks issue categories by urgency and operational impact. Actions are ordered for the content operations team.

Priority Category Urgency Rationale
1 Wrong Key or Factually Unsafe Immediate Actively teaches wrong information; patient safety and exam integrity risk
2 Broken Delivery (Missing Image) Immediate Unanswerable items deployed live create a negative candidate experience and corrupt analytics
3 Repetitive / Duplicative with Conflicting Keys High The Burking cluster has two PYQs with different keys for the same scenario — this must be resolved before any Burking item is deployed
4 Wrong Subject / Topic Misplacement High Corrupts topic-level analytics and quiz logic; quick fix (reroute)
5 Worthwhile Concept, Weak Execution Medium Items are not harmful but are underperforming; fix improves discriminative value
6 Low-Value But Correct (Trivia) Medium Large volume; disabling improves overall bank quality and candidate experience
7 Repetitive / Duplicative (No Key Conflict) Low Disable weaker duplicate; no urgency if both are currently inactive

Bloom's distribution remediation is a cross-cutting priority. The candidate pool is 64% Bloom's Level 1. The content team should set a target of no more than 40% Bloom's Level 1 for new Forensic Medicine items, with at least 20% at Bloom's Level 3–4. The fastest path to improving the distribution is not rewriting existing Level 1 items (which is expensive) but ensuring that all new items entering the bank are written at Level 3–4 by default, using clinical vignettes or legal scenario formats.
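The Bloom's targets above are easy to enforce automatically at item intake. The following is a minimal sketch, assuming each new item carries an integer Bloom's level (1–6); the function name, thresholds wiring, and field shape are illustrative, not an existing tool:

```python
from collections import Counter

# Targets from this review (assumed as batch-level gates):
# no more than 40% of items at Bloom's Level 1,
# at least 20% at Bloom's Level 3-4.
MAX_L1_SHARE = 0.40
MIN_L34_SHARE = 0.20

def bloom_gate(bloom_levels):
    """Check a batch of Bloom's levels against the intake targets.

    Returns (passes, shares) where shares reports the observed
    proportions for Level 1 and Levels 3-4 combined.
    """
    if not bloom_levels:
        return False, {}
    counts = Counter(bloom_levels)
    n = len(bloom_levels)
    l1_share = counts[1] / n
    l34_share = (counts[3] + counts[4]) / n
    passes = l1_share <= MAX_L1_SHARE and l34_share >= MIN_L34_SHARE
    return passes, {"L1": l1_share, "L3-4": l34_share}

# The candidate sample reviewed here (64% Level 1, only 3 items at
# Level 3+) would fail both gates:
sample = [1] * 64 + [2] * 33 + [3] * 3
ok, shares = bloom_gate(sample)
```

Applied at batch submission time, a gate like this keeps remediation cheap: authors fix the mix before items enter the bank, rather than the content team rewriting Level 1 items after the fact.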


Example Keep / Fix / Disable Calls

The following table provides concrete disposition recommendations for a representative selection of items from the candidate sample. These are production-ready calls, not suggestions for further review (except where explicitly noted).

Question ID Topic Disposition Reason
cc163ecf Forensic Toxicology Keep (minor fix) Good clinical vignette, correct key, exam-relevant. Add "on breath and sweat" to garlic odour description to remove phosphorus ambiguity.
dfdb76ae Forensic Toxicology Fix Correct key (oximes contraindicated in carbamate poisoning), high-yield concept. Rewrite as clinical vignette of cholinergic toxidrome.
eaeb93a5 Infanticide Fix Correct key (atelectasis = false-negative hydrostatic test), exam-relevant. Rewrite to test false-positive vs. false-negative distinction.
fc734073 Medicolegal Autopsies Fix Correct key, exam-relevant concept. Convert from "NOT true" to positive framing; balance option specificity.
66ea17be Injuries and Their Significance Fix Correct key, high-yield concept. Rewrite as autopsy vignette; replace cherry-red skin distractor.
736c2023 Sexual Offences and Abortion Fix Correct key, high-yield topic (MTP Act). Align clinical scenario with the question being asked; restructure gestational thresholds as discriminating distractors.
2f52a516 Forensic Toxicology Disable Wrong key. Strychnine does not cause central respiratory depression; opium/barbiturates do. Actively misleading.
7d572ffc Injuries and Their Significance Disable Wrong key (bare hands is unsafe forensic practice) AND direct duplicate of 9f21a162.
9f21a162 Forensic Science in Court Disable Direct clone of 7d572ffc; wrong key.
08ff8d57 Asphyxial Deaths Disable Duplicates the contested Burking PYQ cluster; adds no value while the key conflict in the PYQ pair remains unresolved.
a00e289d Forensic Pathology Disable Trivially simple; answer self-evident from the word "rigor." No discriminative value.
f459a54a Forensic Toxicology Disable Obscure trivia (sin needles / rati seeds); no exam precedent, no clinical application.
92ee5378 Identification Disable Obscure cultural trivia (Chinese torture method); no forensic reasoning demand, no exam relevance.
dd207122 Sexual Offences and Abortion Disable Wrong topic tag AND vocabulary-level trivia (nyctophobia). No forensic medicine content.
8cb49dc8 Injuries and Their Significance Disable IPC section number recall; obsolete given BNS transition; concept covered better by benchmark item 0ece3701.
b8d8e402 Forensic Toxicology Fix (urgent) Key is ambiguous — cherry-red lividity is primarily associated with CO poisoning in standard Indian texts, not cyanide. Add CO as an option and revise key or add detailed explanation.
9037d617 Medical Jurisprudence Fix Key may be correct under old CrPC but is legally outdated under BNSS. Update to current law and specify applicable code in stem.
d17d911c Legal and Ethical Aspects Fix MCI no longer exists (replaced by NMC); record retention period varies by record type. Specify record type and update regulatory body.
9de839cd Asphyxial Deaths Hold Image-dependent; potentially good Bloom's Level 4 item if image is confirmed. Disable if image absent.
425e1059 Forensic Toxicology Hold Image-dependent (Mees' lines). Concept already covered by cc163ecf and gold-set arsenic items. Disable if image absent; if image present, evaluate for duplication.
844f606d Asphyxial Deaths Urgent review Knot behind left ear is the standard position in judicial hanging, not self-inflicted hanging. Possible wrong key — verify against Reddy/Pillay before any deployment.
69c52303 Forensic Psychiatry Reroute + Keep Pugilistic attitude belongs in Forensic Pathology / Burns. After rerouting, item is factually correct and usable as a low-level recall item.
782941bc Asphyxial Deaths Reroute + Fix Belongs in Identification/Forensic Serology. After rerouting, rewrite stem to specify "forensic identification of blood stains" context.
eef9ffca Forensic Toxicology Fix Correct key (aconite is not a deliriant), exam-relevant classification. Rewrite as application-level vignette rather than "EXCEPT" list.
2b5f7eed Medical Jurisprudence Fix Conceptually confused stem (conflates definition of privileged communication with reporting obligations). Rewrite to clarify which aspect is being tested.
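Since these are production-ready calls, the content operations team can action them as ordered work queues rather than row by row. A minimal sketch follows; the record shape and the exact queue ordering are assumptions layered on the Prioritization section (urgent and unsafe work first), not an existing pipeline:

```python
from collections import defaultdict

# Hypothetical action-priority order, mirroring the disposition labels
# used in the table above and the urgency ranking given earlier.
ACTION_ORDER = [
    "Urgent review", "Fix (urgent)", "Disable", "Hold",
    "Reroute + Fix", "Reroute + Keep", "Fix", "Keep (minor fix)",
]

def build_queues(calls):
    """Group (question_id, topic, disposition) records into work queues,
    returned in action-priority order with empty queues dropped."""
    queues = defaultdict(list)
    for qid, topic, disposition in calls:
        queues[disposition].append(qid)
    return [(d, queues[d]) for d in ACTION_ORDER if queues[d]]

calls = [
    ("844f606d", "Asphyxial Deaths", "Urgent review"),
    ("2f52a516", "Forensic Toxicology", "Disable"),
    ("dfdb76ae", "Forensic Toxicology", "Fix"),
]
# build_queues(calls) yields the Urgent review queue first,
# then Disable, then Fix.
```

Keeping the ordering in one list also makes the triage policy auditable: changing the team's priorities is a one-line edit rather than a change to the grouping logic.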

This report covers the reviewed candidate sample only. Conclusions about the full 5,504-item bank should not be drawn without a broader audit. The patterns identified here — particularly the Bloom's Level 1 skew, the image-dependency without confirmed image delivery, and the IPC-to-BNS legislative currency gap — are likely to appear at higher absolute volumes across the full bank given the sample proportions observed.