SEAMS
Run stamp: 20260313_094827  •  Tool: v348  •  Generated: 2026-03-13 09:48:47

Receipts

Run stamp: 20260313_094827
Tool version: v348
SEAM model: seamv1
Generated: 2026-03-13 09:48:47
Total rows scanned: 10878
Unique records: 8006
Duplicate rate: 26.4%
Seam candidate rate (deduped): 0.1%
Median views: 6
Median downloads: 1
Median conversion: 0.052
Median citations: 1

What these numbers mean

  • Total rows scanned: total sampled rows across slices (rows × sorts × pages × terms). Not deduped.
  • Unique records: deduped count of unique record_id values in the consolidated dataset.
  • Duplicate rate: 1 − (unique / total). High duplication usually means the same papers repeat across sorts/buckets/pages.
  • SEAM candidate rate: candidates / unique (deduped) based on the SEAM model threshold.
  • Median views/downloads/conversion: robust central tendency across the current run’s unique records.
  • Top seam tells: the most common seam signals seen in the candidate set.
  • New keyword candidates: terms disproportionately associated with seam candidates (heuristic, min sample size).
  • Exclude/downweight terms: high-volume terms with low median conversion (heuristic, min sample size).
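The headline ratios in the receipts follow directly from these definitions. A minimal sketch of the arithmetic, assuming rows carry `record_id` and `is_candidate` fields (the actual consolidated CSV schema may name these differently):

```python
# Sketch: recompute the headline receipt ratios from consolidated rows.
# Assumes each row has 'record_id' and 'is_candidate' keys; the real
# summary_audit.csv columns may be named differently.

def receipt_ratios(rows):
    total = len(rows)                                # not deduped
    unique = len({r["record_id"] for r in rows})     # deduped record count
    duplicate_rate = 1 - unique / total              # 1 - (unique / total)
    candidates = len({r["record_id"] for r in rows if r["is_candidate"]})
    candidate_rate = candidates / unique             # candidates / unique
    return total, unique, duplicate_rate, candidate_rate

# Synthetic data shaped like this run: 10878 rows over 8006 unique ids,
# 14 unique candidate records.
rows = [{"record_id": i % 8006, "is_candidate": i % 8006 < 14}
        for i in range(10878)]
total, unique, dup, cand = receipt_ratios(rows)
print(total, unique, round(dup, 3), round(cand, 4))  # 10878 8006 0.264 0.0017
```

The deduped candidate rate rounds to roughly 0.2% here; the 0.1% shown in the receipts suggests the tool truncates rather than rounds.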

Engagement Comparison

Group | Rows | Unique | Median Views | Median Downloads | Median Conversion
Candidates | 24 | 14 | 16 | 2 | 0.071
Non-candidates | 10854 | 7992 | 6 | 1 | 0.052

Run Funnel

Rows scanned: 10878 → Unique records: 8006 → Rows with text: 10873 → Rows with tells: 1729 → Near misses: 20 → Candidates: 24
A quick read on where rows are being filtered out between collection, text availability, tell detection, near-miss status, and final candidate status.
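The funnel stages above can be sketched as independent counts over the consolidated rows. The field names (`text`, `tells`, `near_miss`, `candidate`) are assumptions, not the tool's actual schema, and near misses and candidates are disjoint diagnostic buckets rather than nested ones:

```python
# Sketch: the run funnel as independent counts over consolidated rows.
# Field names ('text', 'tells', 'near_miss', 'candidate') are assumptions.

def run_funnel(rows):
    return {
        "rows_scanned": len(rows),
        "unique_records": len({r["record_id"] for r in rows}),
        "rows_with_text": sum(1 for r in rows if r.get("text")),
        "rows_with_tells": sum(1 for r in rows if r.get("tells")),
        "near_misses": sum(1 for r in rows if r.get("near_miss")),
        "candidates": sum(1 for r in rows if r.get("candidate")),
    }
```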

Bucket Yield

Bucket | Rows / Yield
A_autonomous_ai_safety | 1750 / 389
F_dsm_integration_structure | 1750 / 318
D_custom | 1537 / 304
B_systems_resilience | 1750 / 228
C_standards_conformance | 1750 / 201
I_authorization_gating | 1750 / 199
E_interface_icd | 591 / 90
Rows is the total sampled per bucket; yield is the candidate + near-miss count, so weak buckets stand out quickly.

SEAM Candidate Articles

Each entry lists the article title, its bucket:sort sources, score/views/DL/cites, and an evidence snippet.
Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring
F_dsm_integration_structure:bestmatch|F_dsm_integration_structure:mostdownloaded|F_dsm_integration_structure:mostrecent|F_dsm_integration_structure:mostviewed
4619717912
in this paper, we propose an approach to use dnn uncertainty estimators to implement such supervisor. we first discuss advantages and disadvantages of e...
Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring
F_dsm_integration_structure:bestmatch|F_dsm_integration_structure:mostdownloaded|F_dsm_integration_structure:mostviewed
461094212
lastly, we discuss a large-scale study conducted on four different subjects to empirically validate the approach, reporting the lessons-learned as guidance f...
Development of Pharmacovigilance in AI Tool: Drug Safety Monitoring
D_custom:mostrecent|D_custom:mostviewed
3021310
pharmacovigilance, drug safety monitoring, artificial intelligence, adverse drug reactions, public health.
Warranty Issues of PDE-5 Inhibitors as Off-Label Use Action Mechanism based on Pharmacoeconomics for Patient Safety in Taxation System
A_autonomous_ai_safety:mostviewed
3017212910
ENHANCING DRUG SAFETY: THE PHARMACIST'S ROLE IN MONITORING AND REPORTING ADVERSE DRUG REACTIONS
A_autonomous_ai_safety:mostdownloaded
305337910
keywords: pharmacists, adverse drug reactions, pharmacovigilance, patient safety, drug monitoring.
The Use of the Simplex Architecture to Enhance Safety in Deep-Learning-Powered Autonomous Systems
F_dsm_integration_structure:newest
220022
Piecewise Control Barrier Functions for Stochastic Systems
A_autonomous_ai_safety:newest
200020
Civilizational AI Governance: A Four-Path Framework for Structural Responsibility Alignment
E_interface_icd:bestmatch|E_interface_icd:mostdownloaded|E_interface_icd:mostrecent|E_interface_icd:mostviewed
1411010
this working paper proposes a structural framework for civilizational ai governance grounded in responsibility alignment rather than technical compliance. it argues that artificial intelligence governance must begin from a civilizational positioning of hum...
Infection Prevention and Control Practices among Staff Nurses in Hail, KSA: Basis for Improved Patient Safety
A_autonomous_ai_safety:mostdownloaded
103224210
this study aimed to determine the infection prevention and control practices by staff nurses.
Research on risk prevention and control of coal mine gas explosion using bayesian network and system dynamics: An optimization model for safety investment decision-making
A_autonomous_ai_safety:newest|F_dsm_integration_structure:newest
100010
Perceive–Assess–Dose–Safeguard: a safety-gated state–action grammar for psychotherapy micro-decisions in computational psychiatry
D_custom:newest
100010
Civilizational AI Governance: A Four-Path Framework for Structural Responsibility Alignment
E_interface_icd:newest
100010
this working paper proposes a structural framework for civilizational ai governance grounded in responsibility alignment rather than technical compliance. it argues that artificial intelligence governance must begin from a civilizational positioning of hum...
PromptGuard: An Orchestrated Prompting Framework for Principled Synthetic Text Generation for Vulnerable Populations using LLMs with Enhanced Safety, Fairness, and Controllability
A_autonomous_ai_safety:newest
100010

Top Seam Tells

Tell | Count
1 | 24
domain | 16
safety | 16
gate+action+domain | 15
fail-safe | 7
safe execution | 7
safe state | 7
17 | 7
10 | 4
AI governance civilizational ethics structural responsibility global governance technology policy long-term governance | 4
Note: Some sources do not provide abstracts/metrics; “tells” may be title-derived for those rows.

Near-Miss Diagnostics

Each entry lists the article title, its bucket, then the source with score/tells/views/DL and the reason it fell short.
Data from: Evaluation of a pharmacist-led actionable audit and feedback intervention for improving medication safety in primary care: an interrupted time series analysis
D_custom
zenodo | 0320515 | Near threshold
A Medication Audit to Assess the Knowledge and Practice Among Community Pharmacists Regarding Medication Dispensing and its Safety in Pregnancy
D_custom
zenodo | 0311775 | Near threshold
Modelling two-vehicle crash severity under interwoven heterogeneity and interaction effects: A reliability and system safety perspective
A_autonomous_ai_safety
crossref | 0 / 3 / 0 / 0 | No conversion; No metrics
Modelling two-vehicle crash severity under interwoven heterogeneity and interaction effects: A reliability and system safety perspective
F_dsm_integration_structure
crossref | 0 / 3 / 0 / 0 | No conversion; No metrics
A Hybrid Knowledge-Grounded Framework for Safety and Traceability in Prescription Verification
A_autonomous_ai_safety
arxiv | 0 / 3 / 0 / 0 | No conversion; No metrics
A Hybrid Knowledge-Grounded Framework for Safety and Traceability in Prescription Verification
C_standards_conformance
arxiv | 0 / 3 / 0 / 0 | No conversion; No metrics
A Hybrid Knowledge-Grounded Framework for Safety and Traceability in Prescription Verification
I_authorization_gating
arxiv | 0 / 3 / 0 / 0 | No conversion; No metrics
ILION-Bench v2: Execution Safety Benchmark for Agentic AI Systems
C_standards_conformance
zenodo | 021157613811 | Near threshold
Rebutting vaccine safety claims made by Dr. Hotez in Nature Pediatric Research
I_authorization_gating
zenodo | 02172832232 | Near threshold
Vaccine safety: Learning from the Boeing 737 MAX disasters
I_authorization_gating
zenodo | 0283132439 | Near threshold
Low intensity transcranial electric stimulation: Safety, ethical, legal regulatory and application guidelines.
I_authorization_gating
zenodo | 021565047 | Near threshold
BE AWARE MGA SUKI: PRACTICES OF FOOD SAFETY AND APPROPRIATE HYGIENE AMONG SIDEWALK VENDORS IN BALAYAN, BATANGAS
C_standards_conformance
zenodo | 0213981021 | Near threshold
Rows shown here are not final SEAM candidates, but they were close enough to help tune thresholds, buckets, and tell logic.

Why rows did not become SEAM candidates

Reason | Rows | Share
Low SEAM score | 10854 | 100.0%
No seam tells detected | 9149 | 84.3%
No measurable conversion | 5344 | 49.2%
No engagement metrics | 5064 | 46.7%
Missing abstract / summary text | 5 | 0.0%
Source likely sparse on abstract/metrics | 5 | 0.0%
These are diagnostic reasons counted across non-candidate rows. A single row can contribute to more than one reason, so the shares are directional rather than additive.
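The non-additive shares can be reproduced with a simple multi-label tally. The example reasons below mirror the table, but the predicate logic that assigns reasons to a row is not shown here:

```python
from collections import Counter

# Sketch: multi-label tally of non-candidate reasons. A row may carry
# several reasons, so the shares can legitimately sum past 100%.

def tally_reasons(rows):
    counts = Counter()
    for row in rows:
        for reason in row["reasons"]:      # one row, possibly many reasons
            counts[reason] += 1
    n = len(rows)
    return {reason: (c, c / n) for reason, c in counts.most_common()}

rows = [
    {"reasons": ["Low SEAM score", "No seam tells detected"]},
    {"reasons": ["Low SEAM score", "No engagement metrics"]},
    {"reasons": ["Low SEAM score"]},
]
for reason, (c, share) in tally_reasons(rows).items():
    print(f"{reason}: {c} / {share:.0%}")
# The printed shares total well over 100% because rows repeat across reasons.
```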

Bucket Performance

bucket | rows | unique | text | tells | near | cand | cand_rate | med_views | med_dl | med_cites
A_autonomous_ai_safety | 1750 | 1519 | 1748 | 389 | 383 | 6 | 0.3% | 8 | 3 | 0
F_dsm_integration_structure | 1750 | 1054 | 1750 | 318 | 309 | 9 | 0.5% | 18 | 16 | 0
D_custom | 1537 | 868 | 1535 | 304 | 301 | 3 | 0.2% | 12 | 8 | 0
B_systems_resilience | 1750 | 1232 | 1750 | 228 | 228 | 0 | 0.0% | 25 | 17 | 0
C_standards_conformance | 1750 | 1614 | 1749 | 201 | 201 | 0 | 0.0% | 1 | 0 | 0
I_authorization_gating | 1750 | 1644 | 1750 | 199 | 199 | 0 | 0.0% | 1 | 0 | 0
E_interface_icd | 591 | 530 | 591 | 90 | 84 | 6 | 1.0% | 0 | 0 | 0
Use this to see which buckets are producing rows, text, tells, near-misses, and candidates.

Source Performance

source | rows | unique | text | tells | near | cand | cand_rate | med_views | med_dl | med_cites
zenodo | 6080 | 3586 | 6080 | 1000 | 984 | 16 | 0.3% | 66.5 | 76 | 0
arxiv | 1298 | 1269 | 1298 | 304 | 301 | 3 | 0.2% | 0 | 0 | 0
openalex | 1750 | 1654 | 1748 | 226 | 224 | 2 | 0.1% | 0 | 0 | 0
crossref | 1750 | 1497 | 1747 | 199 | 196 | 3 | 0.2% | 0 | 0 | 0
Use this to compare which sources are providing richer data versus sparse metadata.

Top “New Keyword Candidates”

term | n | cand_rate | med_conv
E_interface_icd | 591 | 1.0% | 0.000
F_dsm_integration_structure | 1750 | 0.5% | 0.306
A_autonomous_ai_safety | 1750 | 0.3% | 0.178
D_custom | 1537 | 0.2% | 0.520
B_systems_resilience | 1750 | 0.0% | 0.488
I_authorization_gating | 1750 | 0.0% | 0.000
C_standards_conformance | 1750 | 0.0% | 0.000
Heuristic: terms with high seam-candidate association (min 5 samples). Low-sample rows are highlighted and tooltipped.
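A minimal sketch of this heuristic, assuming each row carries a term list and a candidate flag; `MIN_SAMPLES` and the field names are illustrative, not the tool's exact values:

```python
# Sketch of the keyword-candidate heuristic: keep terms that appear on at
# least MIN_SAMPLES rows and whose candidate rate beats the baseline rate.
# MIN_SAMPLES and the row field names are illustrative assumptions.

MIN_SAMPLES = 5

def keyword_candidates(rows, baseline_rate):
    by_term = {}
    for row in rows:
        for term in row["terms"]:
            by_term.setdefault(term, []).append(bool(row["is_candidate"]))
    hits = []
    for term, flags in by_term.items():
        if len(flags) < MIN_SAMPLES:
            continue                        # low-sample terms are dropped
        rate = sum(flags) / len(flags)
        if rate > baseline_rate:            # disproportionate association
            hits.append((term, len(flags), rate))
    return sorted(hits, key=lambda h: h[2], reverse=True)
```

The exclude/downweight list below is the mirror image of the same idea: high-volume terms whose median conversion falls under a cutoff instead of terms whose candidate rate exceeds a baseline.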

Top “Exclude / Downweight Terms”

term | n | med_conv
I_authorization_gating | 1000 | 0.629
E_interface_icd | 80 | 0.697
C_standards_conformance | 1000 | 0.714
A_autonomous_ai_safety | 1000 | 0.825
D_custom | 1000 | 0.853
F_dsm_integration_structure | 1000 | 0.947
B_systems_resilience | 1000 | 1.090
Heuristic: Zenodo terms with low median conversion (min 10 samples, cutoff 0.02). Low-sample rows are highlighted and tooltipped.

History and trends

Recent run trends (last 25 consolidated CSVs)
Stamp | Median Views | Median Downloads | Median Conv | Median Citations | Unique | SEAM Cand Rate
20260313_094827 | 6 | 1 | 0.052 | 0 | 8006 | 0.0%
Trend scan root: E:\David\CRYPTO\BLOCK VECTOR\COMPANY LIBRARY\06_Code_and_Modules\SEAMS\profiles\SEAMS_Domains\Medicine_Healthcare\runs

Files

summary_audit.csv and summary_audit.json are now the canonical summary artifacts for this run. The HTML page is a viewer generated from those data files.