Documentation for the tasks

scprint2.tasks.cell_emb

Classes:

  • Embedder – A class to embed and annotate cells using a scPRINT model.

Functions:

  • compute_classification – Compute classification metrics for the given annotated data.
  • compute_corr – Compute the correlation between the output and target matrices.
  • default_benchmark – Run the default benchmark for embedding and annotation using the scPRINT model.
  • display_confusion_matrix – Display the confusion matrix for true vs predicted cell types.

Embedder

Embedder: a class to embed and annotate cells using a scPRINT model.

Parameters:
  • batch_size (int, default: 64 ) –

    The size of the batches to be used in the DataLoader. Defaults to 64.

  • num_workers (int, default: 8 ) –

    The number of worker processes to use for data loading. Defaults to 8.

  • how (str, default: 'random expr' ) –

    The method to be used for selecting valid genes. Defaults to "random expr".
      - "random expr": random expression
      - "most var": highly variable genes in the dataset
      - "some": specific genes (from genelist)
      - "most expr": most expressed genes in the cell

  • max_len (int, default: 2000 ) –

    The maximum length of the gene sequence given to the model. Defaults to 2000.

  • doclass (bool, default: True ) –

    Whether to perform classification. Defaults to True.

  • pred_embedding (List[str], default: ['all'] ) –

    The list of labels whose embeddings are used to build the cell embedding. Defaults to ["all"], which uses the embeddings of all the model's classes.

  • doplot (bool, default: True ) –

    Whether to generate plots. Defaults to True.

  • keep_all_labels_pred (bool, default: False ) –

    Whether to keep all class predictions. Defaults to False, which only keeps the most likely class.

  • genelist (List[str], default: None ) –

    The list of genes to be used for embedding. Defaults to None (treated as an empty list); in this case, "how" needs to be "most var" or "random expr".

  • save_every (int, default: 40000 ) –

    The number of cells to save at a time. Defaults to 40_000. This is important to avoid memory issues.

  • unknown_label (str, default: 'unknown' ) –

    The label to be used for unknown cell types. Defaults to "unknown".

  • use_knn (bool, default: True ) –

    Whether to use k-nearest neighbors information. Defaults to True.
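
A minimal usage sketch (the model loading step and the dataset path are assumptions, not shown on this page):

import scanpy as sc
from scprint2.tasks.cell_emb import Embedder

adata = sc.read_h5ad("my_dataset.h5ad")  # hypothetical path
# required obs column, used by the underlying dataset/collator
adata.obs["organism_ontology_term_id"] = "NCBITaxon:9606"

embedder = Embedder(batch_size=32, how="random expr", max_len=2000, doplot=False)
# model: a trained scPRINT model loaded beforehand
adata, metrics = embedder(model, adata)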

Methods:

  • __call__ – Embed and annotate the cells of an AnnData object with the model.

Source code in scprint2/tasks/cell_emb.py
def __init__(
    self,
    batch_size: int = 64,
    num_workers: int = 8,
    how: str = "random expr",
    max_len: int = 2000,
    doclass: bool = True,
    pred_embedding: List[str] = [
        "all",
    ],
    doplot: bool = True,
    keep_all_labels_pred: bool = False,
    genelist: Optional[List[str]] = None,
    save_every: int = 40_000,
    unknown_label: str = "unknown",
    use_knn: bool = True,
):
    """
    Embedder: a class to embed and annotate cells using a model

    Args:
        batch_size (int, optional): The size of the batches to be used in the DataLoader. Defaults to 64.
        num_workers (int, optional): The number of worker processes to use for data loading. Defaults to 8.
        how (str, optional): The method to be used for selecting valid genes. Defaults to "random expr".
            - "random expr": random expression
            - "most var": highly variable genes in the dataset
            - "some": specific genes (from genelist)
            - "most expr": most expressed genes in the cell
        max_len (int, optional): The maximum length of the gene sequence given to the model. Defaults to 2000.
        doclass (bool, optional): Whether to perform classification. Defaults to True.
        pred_embedding (List[str], optional): The list of labels whose embeddings are used to build the cell embedding. Defaults to ["all"], which uses the embeddings of all the model's classes.
        doplot (bool, optional): Whether to generate plots. Defaults to True.
        keep_all_labels_pred (bool, optional): Whether to keep all class predictions. Defaults to False, which only keeps the most likely class.
        genelist (List[str], optional): The list of genes to be used for embedding. Defaults to None (treated as an empty list); in this case, "how" needs to be "most var" or "random expr".
        save_every (int, optional): The number of cells to save at a time. Defaults to 40_000.
            This is important to avoid memory issues.
        unknown_label (str, optional): The label to be used for unknown cell types. Defaults to "unknown".
        use_knn (bool, optional): Whether to use k-nearest neighbors information. Defaults to True.
    """
    self.batch_size = batch_size
    self.num_workers = num_workers
    self.how = how
    self.max_len = max_len
    self.pred_embedding = pred_embedding
    self.keep_all_labels_pred = keep_all_labels_pred
    self.doplot = doplot
    self.doclass = doclass
    self.genelist = genelist if genelist is not None else []
    self.save_every = save_every
    self.pred = None
    self.unknown_label = unknown_label
    self.use_knn = use_knn

__call__

Embed and annotate the cells of an AnnData object with the model.

Parameters:
  • model (Module) –

    The scPRINT model to be used for embedding and annotation.

  • adata (AnnData) –

    The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

Raises:
  • ValueError

    If the model does not have a logger attribute.

  • ValueError

    If the model does not have a global_step attribute.

Returns:
  • AnnData –

    The annotated data matrix with embedded cell representations.

  • dict –

    classification metrics results when some ground truth information was available in the anndata.
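
A sketch of what the call attaches to the returned AnnData (key names taken from the source below; one "pred_<class>" column exists per predicted class):

adata, metrics = embedder(model, adata)
adata.obsm["scprint_emb"]       # the cell embedding
adata.obsm["X_scprint_umap"]    # UMAP of the embedding (when enough cells)
adata.obs["scprint_leiden"]     # Leiden clustering (when enough cells)
adata.obs["pred_cell_type_ontology_term_id"]    # example "pred_<class>" column
metrics["cell_type_ontology_term_id_accuracy"]  # per-class accuracy when ground truth is present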

Source code in scprint2/tasks/cell_emb.py
def __call__(self, model: torch.nn.Module, adata: AnnData) -> tuple[AnnData, dict]:
    """
    Embed and annotate the cells of an AnnData object with the model.

    Args:
        model (torch.nn.Module): The scPRINT model to be used for embedding and annotation.
        adata (AnnData): The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

    Raises:
        ValueError: If the model does not have a logger attribute.
        ValueError: If the model does not have a global_step attribute.

    Returns:
        AnnData: The annotated data matrix with embedded cell representations.
        dict: classification metrics results when some ground truth information was available in the anndata.
    """
    # one of "all" "sample" "none"
    model.predict_mode = "none"
    self.pred = None
    prevkeep = model.keep_all_labels_pred
    model.keep_all_labels_pred = self.keep_all_labels_pred
    # Add at least the organism you are working with
    if self.how == "most var":
        sc.pp.highly_variable_genes(
            adata, flavor="seurat_v3", n_top_genes=self.max_len
        )
        self.genelist = adata.var.index[adata.var.highly_variable]
    adataset = SimpleAnnDataset(
        adata,
        obs_to_output=["organism_ontology_term_id"],
        get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
    )
    col = Collator(
        organisms=model.organisms,
        valid_genes=model.genes,
        how=self.how if self.how != "most var" else "some",
        max_len=self.max_len,
        add_zero_genes=0,
        genelist=self.genelist if self.how in ["most var", "some"] else [],
        n_bins=model.n_input_bins if model.expr_emb_style == "binned" else 0,
    )
    dataloader = DataLoader(
        adataset,
        collate_fn=col,
        batch_size=self.batch_size,
        num_workers=self.num_workers,
        shuffle=False,
    )
    model.eval()
    model.on_predict_epoch_start()
    device = model.device.type
    prevplot = model.doplot
    model.pred_log_adata = True
    model.doplot = self.doplot and not self.keep_all_labels_pred
    model.save_expr = False
    rand = random_str()
    dtype = (
        torch.float16
        if isinstance(model.transformer, FlashTransformer)
        else model.dtype
    )
    with (
        torch.no_grad(),
        torch.autocast(device_type=device, dtype=dtype),
    ):
        for batch in tqdm(dataloader):
            gene_pos, expression, depth = (
                batch["genes"].to(device),
                batch["x"].to(device),
                batch["depth"].to(device),
            )
            pred = model._predict(
                gene_pos,
                expression,
                depth,
                knn_cells=(
                    batch["knn_cells"].to(device)
                    if model.expr_emb_style == "metacell" and self.use_knn
                    else None
                ),
                knn_cells_info=(
                    batch["knn_cells_info"].to(device)
                    if model.expr_emb_style == "metacell" and self.use_knn
                    else None
                ),
                pred_embedding=self.pred_embedding,
                max_size_in_mem=self.save_every,
                name="embed_" + rand + "_",
            )
            torch.cuda.empty_cache()
            if self.keep_all_labels_pred:
                if pred is not None:
                    self.pred = (
                        pred if self.pred is None else torch.cat([self.pred, pred])
                    )
    model.log_adata(name="embed_" + rand + "_" + str(model.counter))

    model.pos = None
    model.expr_pred = None
    model.embs = None
    if self.keep_all_labels_pred:
        self.pred = (
            model.pred if self.pred is None else torch.cat([self.pred, model.pred])
        )
    model.pred = None
    model.save_expr = True
    try:
        mdir = (
            model.logger.save_dir if model.logger.save_dir is not None else "data"
        )
    except Exception:
        mdir = "data"
    pred_adata = []
    del adataset, dataloader
    for i in range(model.counter + 1):
        file = (
            mdir
            + "/step_"
            + str(model.global_step)
            + "_"
            + model.name
            + "_embed_"
            + rand
            + "_"
            + str(i)
            + "_"
            + str(model.global_rank)
            + ".h5ad"
        )
        pred_adata.append(sc.read_h5ad(file))
        os.remove(file)
    pred_adata = concat(pred_adata)
    pred_adata.obs.index = adata.obs.index

    try:
        adata.obsm["X_scprint_umap"] = pred_adata.obsm["X_umap"]
    except Exception:
        print("too few cells to embed into a umap")
    try:
        adata.obs["scprint_leiden"] = pred_adata.obs["scprint_leiden"]
    except Exception:
        print("too few cells to compute a clustering")

    if self.pred_embedding == ["all"]:
        pred_embedding = ["other"] + model.classes
    else:
        pred_embedding = self.pred_embedding
    if len(pred_embedding) == 1:
        adata.obsm["scprint_emb"] = pred_adata.obsm[
            "scprint_emb_" + pred_embedding[0]
        ].astype(np.float32)

    else:
        adata.obsm["scprint_emb"] = np.zeros(
            pred_adata.obsm["scprint_emb_" + pred_embedding[0]].shape,
            dtype=np.float32,
        )
        i = 0
        for k, v in pred_adata.obsm.items():
            adata.obsm[k] = v.astype(np.float32)
            if model.compressor is not None:
                if i == 0:
                    adata.obsm["scprint_emb"] = v.astype(np.float32)
                else:
                    adata.obsm["scprint_emb"] = np.hstack(
                        [adata.obsm["scprint_emb"], v.astype(np.float32)]
                    )
            else:
                adata.obsm["scprint_emb"] += v.astype(np.float32)
            i += 1
        if model.compressor is None:
            adata.obsm["scprint_emb"] = adata.obsm["scprint_emb"] / i

    for key, value in pred_adata.uns.items():
        adata.uns[key] = value

    pred_adata.obs.index = adata.obs.index
    model.keep_all_labels_pred = prevkeep
    model.doplot = prevplot
    adata.obs = pd.concat([adata.obs, pred_adata.obs], axis=1)
    del pred_adata
    if self.keep_all_labels_pred:
        allclspred = self.pred.to(device="cpu").numpy()
        columns = []
        for cl in model.classes:
            n = model.label_counts[cl]
            columns += [model.label_decoders[cl][i] for i in range(n)]
        allclspred = pd.DataFrame(
            allclspred, columns=columns, index=adata.obs.index
        )
        adata.obs = pd.concat([adata.obs, allclspred], axis=1)

    metrics = {}
    if self.doclass and not self.keep_all_labels_pred:
        for cl in model.classes:
            res = []
            if cl not in adata.obs.columns:
                continue
            class_topred = model.label_decoders[cl].values()

            if cl in model.labels_hierarchy:
                # class_groupings = {
                #    k: [
                #        i.ontology_id
                #        for i in bt.CellType.filter(k).first().children.all()
                #    ]
                #    for k in set(adata.obs[cl].unique()) - set(class_topred)
                # }
                cur_labels_hierarchy = {
                    model.label_decoders[cl][k]: [
                        model.label_decoders[cl][i] for i in v
                    ]
                    for k, v in model.labels_hierarchy[cl].items()
                }
            else:
                cur_labels_hierarchy = {}

            for pred, true in adata.obs[["pred_" + cl, cl]].values:
                if pred == true:
                    res.append(True)
                    continue
                if len(cur_labels_hierarchy) > 0:
                    if true in cur_labels_hierarchy:
                        res.append(pred in cur_labels_hierarchy[true])
                        continue
                    elif true != self.unknown_label:
                        res.append(False)
                    elif true not in class_topred:
                        print(f"true label {true} not in available classes")
                        return adata, metrics
                elif true not in class_topred:
                    print(f"true label {true} not in available classes")
                    return adata, metrics
                elif true != self.unknown_label:
                    res.append(False)
                # else true is unknown
                # else we pass
            if len(res) == 0:
                # true was always unknown
                res = [1]
            if self.doplot:
                print("    ", cl)
                print("     accuracy:", sum(res) / len(res))
                print(" ")
            metrics.update({cl + "_accuracy": sum(res) / len(res)})
    self.pred = None
    return adata, metrics

compute_classification

Compute classification metrics for the given annotated data.

Parameters:
  • adata (AnnData) –

    The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

  • classes (List[str]) –

    List of class labels to be used for classification.

  • label_decoders (Dict[str, Any]) –

    Dictionary of label decoders for each class.

  • labels_hierarchy (Dict[str, Any]) –

    Dictionary representing the hierarchy of labels.

  • metric_type (List[str], default: ['macro', 'micro', 'weighted'] ) –

    List of metric types to compute. Defaults to ["macro", "micro", "weighted"].

  • use_unknown (bool, default: False ) –

    Whether to keep cells predicted as "unknown" in the metrics. Defaults to False.

Returns:
  • Dict[str, Dict[str, float]] –

    A dictionary containing classification metrics for each class.
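
A self-contained toy example (label values and decoders are made up for illustration; an empty labels_hierarchy skips the ontology-based parent/child matching):

import anndata as ad
import numpy as np
import pandas as pd
from scprint2.tasks.cell_emb import compute_classification

obs = pd.DataFrame({
    "cell_type_ontology_term_id": ["CL:0000057", "CL:0000115", "CL:0000057"],
    "pred_cell_type_ontology_term_id": ["CL:0000057", "CL:0000057", "CL:0000057"],
})
adata = ad.AnnData(X=np.zeros((3, 1)), obs=obs)
metrics = compute_classification(
    adata,
    classes=["cell_type_ontology_term_id"],
    label_decoders={"cell_type_ontology_term_id": {0: "CL:0000057", 1: "CL:0000115"}},
    labels_hierarchy={},
)
print(metrics["cell_type_ontology_term_id"]["accuracy"])  # 2/3 here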

Source code in scprint2/tasks/cell_emb.py
def compute_classification(
    adata: AnnData,
    classes: List[str],
    label_decoders: Dict[str, Any],
    labels_hierarchy: Dict[str, Any],
    metric_type: List[str] = ["macro", "micro", "weighted"],
    use_unknown: bool = False,
) -> Dict[str, Dict[str, float]]:
    """
    Compute classification metrics for the given annotated data.

    Args:
        adata (AnnData): The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.
        classes (List[str]): List of class labels to be used for classification.
        label_decoders (Dict[str, Any]): Dictionary of label decoders for each class.
        labels_hierarchy (Dict[str, Any]): Dictionary representing the hierarchy of labels.
        metric_type (List[str], optional): List of metric types to compute. Defaults to ["macro", "micro", "weighted"].
        use_unknown (bool, optional): Whether to keep cells predicted as "unknown" in the metrics. Defaults to False.

    Returns:
        Dict[str, Dict[str, float]]: A dictionary containing classification metrics for each class.
    """
    metrics = {}
    for clss in classes:
        res = []
        if clss not in adata.obs.columns:
            print("not in columns")
            continue
        labels_topred = label_decoders[clss].values()
        if clss in labels_hierarchy:
            parentdf = (
                bt.CellType.filter()
                .df(include=["parents__ontology_id", "ontology_id"])
                .set_index("ontology_id")[["parents__ontology_id"]]
            )
            parentdf.parents__ontology_id = parentdf.parents__ontology_id.astype(str)
            class_groupings = {
                k: get_descendants(k, parentdf) for k in set(adata.obs[clss].unique())
            }
        tokeep = np.array([True] * adata.shape[0])
        for i, (pred, true) in enumerate(adata.obs[["pred_" + clss, clss]].values):
            if pred == true:
                res.append(true)
                continue
            if true == "unknown":
                tokeep[i] = False
            if clss in labels_hierarchy:
                if true in class_groupings:
                    if pred == "unknown" and not use_unknown:
                        tokeep[i] = False
                    res.append(true if pred in class_groupings[true] else "")
                    continue
                elif true not in labels_topred:
                    raise ValueError(f"true label {true} not in available classes")
            elif true not in labels_topred:
                raise ValueError(f"true label {true} not in available classes")
            res.append("")
        metrics[clss] = {}
        metrics[clss]["accuracy"] = np.mean(
            np.array(res)[tokeep] == adata.obs[clss].values[tokeep]
        )
        for x in metric_type:
            metrics[clss][x] = f1_score(
                np.array(res)[tokeep], adata.obs[clss].values[tokeep], average=x
            )
    return metrics

compute_corr

Compute the correlation between the output and target matrices.

Parameters:
  • out (ndarray) –

    The output matrix.

  • to (ndarray) –

    The target matrix.

  • doplot (bool, default: True ) –

    Whether to generate a plot of the correlation coefficients. Defaults to True.

  • compute_mean_regress (bool, default: False ) –

    Whether to compute mean regression. Defaults to False.

  • plot_corr_size (int, default: 64 ) –

    The size of the plot for correlation. Defaults to 64.

Returns:
  • dict –

    A dictionary containing the computed metrics.
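
Since the source calls spearmanr(out, to.T), out is expected as (n_cells, plot_corr_size) and to as (plot_corr_size, n_cells). A runnable sketch with synthetic data:

import numpy as np
from scprint2.tasks.cell_emb import compute_corr

rng = np.random.default_rng(0)
n_cells, n_genes = 200, 64  # n_genes matches the default plot_corr_size
out = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)  # predicted expression
to = out.T + rng.normal(0, 0.5, size=(n_genes, n_cells))       # noisy target
metrics = compute_corr(out, to, doplot=False)
print(metrics["recons_corr"])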

Source code in scprint2/tasks/cell_emb.py
def compute_corr(
    out: np.ndarray,
    to: np.ndarray,
    doplot: bool = True,
    compute_mean_regress: bool = False,
    plot_corr_size: int = 64,
) -> dict:
    """
    Compute the correlation between the output and target matrices.

    Args:
        out (np.ndarray): The output matrix.
        to (np.ndarray): The target matrix.
        doplot (bool, optional): Whether to generate a plot of the correlation coefficients. Defaults to True.
        compute_mean_regress (bool, optional): Whether to compute mean regression. Defaults to False.
        plot_corr_size (int, optional): The size of the plot for correlation. Defaults to 64.

    Returns:
        dict: A dictionary containing the computed metrics.
    """
    metrics = {}
    corr_coef, p_value = spearmanr(
        out,
        to.T,
    )
    corr_coef[p_value > 0.05] = 0
    # corr_coef[]
    # only on non zero values,
    # compare a1-b1 corr with a1-b(n) corr. should be higher

    # Plot correlation coefficient
    val = plot_corr_size + 2 if compute_mean_regress else plot_corr_size
    metrics.update(
        {"recons_corr": np.mean(corr_coef[val:, :plot_corr_size].diagonal())}
    )
    if compute_mean_regress:
        metrics.update(
            {
                "mean_regress": np.mean(
                    corr_coef[
                        plot_corr_size : plot_corr_size + 2,
                        :plot_corr_size,
                    ].flatten()
                )
            }
        )
    if doplot:
        plt.figure(figsize=(10, 5))
        plt.imshow(corr_coef, cmap="coolwarm", interpolation="none", vmin=-1, vmax=1)
        plt.colorbar()
        plt.title('Correlation Coefficient of expr and i["x"]')
        plt.show()
    return metrics

default_benchmark

Run the default benchmark for embedding and annotation using the scPRINT model.

Parameters:
  • model (Module) –

    The scPRINT model to be used for embedding and annotation.

  • folder_dir (str, default: FILE_LOC + '/../../data/' ) –

    The directory containing data files.

  • dataset (str, default: FILE_LOC + '/../../data/gNNpgpo6gATjuxTE7CCp.h5ad' ) –

    The dataset to use for benchmarking. Can be a path or URL.

  • do_class (bool, default: True ) –

    Whether to perform classification. Defaults to True.

  • coarse (bool, default: False ) –

    Whether to use coarse cell type annotations. Defaults to False.

Returns:
  • dict –

    A dictionary containing the benchmark metrics.
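
A minimal invocation sketch (the model is assumed to be a trained scPRINT model; the dataset may be a local .h5ad path or an https URL):

from scprint2.tasks.cell_emb import default_benchmark

metrics = default_benchmark(model, folder_dir="data/", dataset="data/my_dataset.h5ad")
print(metrics["scib"])  # scib Benchmarker results for the "scprint_emb" embedding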

Source code in scprint2/tasks/cell_emb.py
def default_benchmark(
    model: torch.nn.Module,
    folder_dir: str = FILE_LOC + "/../../data/",
    dataset: str = FILE_LOC + "/../../data/gNNpgpo6gATjuxTE7CCp.h5ad",
    do_class: bool = True,
    coarse: bool = False,
) -> dict:
    """
    Run the default benchmark for embedding and annotation using the scPRINT model.

    Args:
        model (torch.nn.Module): The scPRINT model to be used for embedding and annotation.
        folder_dir (str, optional): The directory containing data files.
        dataset (str, optional): The dataset to use for benchmarking. Can be a path or URL.
        do_class (bool, optional): Whether to perform classification. Defaults to True.
        coarse (bool, optional): Whether to use coarse cell type annotations. Defaults to False.

    Returns:
        dict: A dictionary containing the benchmark metrics.
    """
    if dataset.startswith("https://"):
        adata = sc.read(
            folder_dir
            + dataset.split("/")[-1]
            + (".h5ad" if not dataset.endswith(".h5ad") else ""),
            backup_url=dataset,
        )
    else:
        adata = sc.read_h5ad(dataset)
    if adata.shape[0] > 100_000:
        adata = adata[
            adata.obs_names[np.random.choice(adata.shape[0], 100_000, replace=False)]
        ]
    max_len = 4000 if adata.X.sum(1).mean() < 50_000 else 8000
    batch_size = 64 if adata.X.sum(1).mean() < 50_000 else 32
    log_every = 10_000
    if dataset.split("/")[-1] in ["24539942", "24539828"]:  # lung and pancreas
        adata.obs["organism_ontology_term_id"] = "NCBITaxon:9606"
        use_layer = "counts"
        is_symbol = True
        batch_key = "tech" if dataset.split("/")[-1] == "24539828" else "batch"
        label_key = "celltype" if dataset.split("/")[-1] == "24539828" else "cell_type"
        adata.obs["cell_type_ontology_term_id"] = adata.obs[label_key].replace(
            COARSE if coarse else FINE
        )
        adata.obs["assay_ontology_term_id"] = adata.obs[batch_key].replace(
            COARSE if coarse else FINE
        )
    else:
        use_layer = None
        is_symbol = False
        batch_key = (
            "batch"
            if dataset.split("/")[-1] == "661d5ec2-ca57-413c-8374-f49b0054ddba.h5ad"
            else "assay_ontology_term_id"
        )
        label_key = "cell_type_ontology_term_id"
    preprocessor = Preprocessor(
        use_layer=use_layer,
        is_symbol=is_symbol,
        force_preprocess=True,
        skip_validate=True,
        do_postp=model.expr_emb_style == "metacell",
        drop_non_primary=False,
    )
    adata = preprocessor(adata.copy())
    if model.expr_emb_style == "metacell":
        sc.pp.neighbors(adata, use_rep="X_pca")
    embedder = Embedder(
        pred_embedding=(
            model.pred_embedding if model.pred_embedding is not None else ["all"]
        ),
        doclass=do_class,
        max_len=max_len,
        doplot=False,
        keep_all_labels_pred=False,
        save_every=log_every,
        batch_size=batch_size,
        how="random expr",
    )
    adata, metrics = embedder(model, adata)

    bm = Benchmarker(
        adata,
        batch_key=batch_key,
        label_key=label_key,
        embedding_obsm_keys=["scprint_emb"],
    )
    bm.benchmark()
    metrics.update(
        {"scib": bm.get_results(min_max_scale=False).T.to_dict()["scprint_emb"]}
    )
    if model.class_scale > 0:
        metrics["classif"] = compute_classification(
            adata, model.classes, model.label_decoders, model.labels_hierarchy
        )
    return metrics

display_confusion_matrix

Display the confusion matrix for true vs predicted cell types.

Parameters:
  • nadata (AnnData) –

    Annotated data object containing predictions and ground truth.

  • pred (str, default: 'conv_pred_cell_type_ontology_term_id' ) –

    Column name for predictions. Defaults to "conv_pred_cell_type_ontology_term_id".

  • true (str, default: 'cell_type' ) –

    Column name for ground truth. Defaults to "cell_type".
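
A self-contained toy example (labels are made up; the function renders a row-normalized percentage heatmap):

import anndata as ad
import numpy as np
import pandas as pd
from scprint2.tasks.cell_emb import display_confusion_matrix

obs = pd.DataFrame({
    "cell_type": ["B cell"] * 3 + ["T cell"] * 3,
    "conv_pred_cell_type_ontology_term_id": ["B cell", "B cell", "T cell"] * 2,
})
nadata = ad.AnnData(X=np.zeros((6, 1)), obs=obs)
display_confusion_matrix(nadata)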

Source code in scprint2/tasks/cell_emb.py
def display_confusion_matrix(
    nadata, pred="conv_pred_cell_type_ontology_term_id", true="cell_type"
):
    """
    Display the confusion matrix for true vs predicted cell types.

    Args:
        nadata (AnnData): Annotated data object containing predictions and ground truth.
        pred (str): Column name for predictions. Defaults to "conv_pred_cell_type_ontology_term_id".
        true (str): Column name for ground truth. Defaults to "cell_type".
    """
    counts = None
    for k, v in nadata.obs[true].value_counts().items():
        name = k + " - " + str(v)
        if counts is None:
            counts = pd.DataFrame(
                nadata.obs.loc[
                    nadata.obs[true] == k,
                    pred,
                ].value_counts()
            ).rename(columns={"count": name})
        else:
            counts = pd.concat(
                [
                    counts,
                    pd.DataFrame(
                        nadata.obs.loc[
                            nadata.obs[true] == k,
                            pred,
                        ].value_counts(),
                    ).rename(columns={"count": name}),
                ],
                axis=1,
            )
    counts = counts.T
    # Fill NaN values with 0 for visualization
    counts_filled = counts.fillna(0)

    # Create the heatmap
    plt.figure(figsize=(12, 10))

    # Convert to percentages (row-wise normalization)
    counts_percentage = counts_filled.div(counts_filled.sum(axis=1), axis=0) * 100
    counts_percentage = counts_percentage.iloc[:, counts_percentage.values.max(0) > 5]

    ax = sns.heatmap(
        counts_percentage,
        cmap="Blues",
        cbar_kws={"label": "Percentage (%)"},
        linewidths=0.5,
        square=True,
    )
    # place the x-label on top
    ax.xaxis.set_label_position("top")
    ax.xaxis.tick_top()

    plt.title(
        "Confusion Matrix: " + true + " vs " + pred + " (Percentage)",
        fontsize=16,
        pad=20,
    )
    ax.set_xlabel(pred, fontsize=12)
    ax.set_ylabel(true + " (with counts)", fontsize=12)
    ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha="left", fontsize=12)
    ax.set_yticklabels(ax.get_yticklabels(), rotation=0, fontsize=14)
    plt.tight_layout()
    plt.show()

scprint2.tasks.grn

Classes:

  • GNInfer – A class to infer gene regulatory networks from a dataset using a scPRINT model.

Functions:

  • default_benchmark – Run the default scPRINT GRN benchmark.

GNInfer

GNInfer: a class to infer gene regulatory networks from a dataset using a scPRINT model.

Parameters:
  • batch_size (int, default: 64 ) –

    Batch size for processing. Defaults to 64.

  • num_workers (int, default: 8 ) –

    Number of workers for data loading. Defaults to 8.

  • drop_unexpressed (bool, default: True ) –

    Whether to drop unexpressed genes. Defaults to True. In this context, genes that have no expression in the dataset are dropped.

  • num_genes (int, default: 3000 ) –

    Number of genes to consider. Defaults to 3000.

  • max_cells (int, default: 0 ) –

    Maximum number of cells to consider. Defaults to 0 (use all cells). If set to less than the total number of cells, only the top max_cells cells with the most counts will be considered.

  • cell_type_col (str, default: 'cell_type' ) –

    Column name for cell type information. Defaults to "cell_type".

  • how (str, default: 'random expr' ) –

    Method to select genes. Options are "most var within", "most var across", "random expr", "some", "most expr". Defaults to "random expr".
      - "most var across": select the most variable genes across all cell types
      - "most var within": select the most variable genes within a cell type
      - "random expr": select random expressed genes
      - "some": select a subset of genes defined in genelist
      - "most expr": select the most expressed genes in the cell type

  • genelist (list, default: None ) –

    List of genes to consider. Defaults to None (treated as an empty list).

  • layer (Optional[List[int]], default: None ) –

    List of layers to use for the inference. Defaults to None.

  • preprocess (str, default: 'softmax' ) –

    Preprocessing method. Options are "softmax", "sinkhorn", "softpick", "none". Defaults to "softmax".

  • head_agg (str, default: 'mean' ) –

    Aggregation method for heads. Options are "mean", "mean_full", "max", "none". Defaults to "mean".

  • filtration (str, default: 'thresh' ) –

    Filtration method for the adjacency matrix. Options are "thresh", "top-k", "mst", "tmfg", "known", "none". Defaults to "thresh".

  • k (int, default: 10 ) –

    Number of top connections to keep if filtration is "top-k". Defaults to 10.

  • known_grn (optional, default: None ) –

    Known gene regulatory network to use as a reference. Defaults to None. - We will only keep the genes that are present in the known GRN.

  • precomp_attn (bool, default: False ) –

    Whether to let the model precompute the attention matrices instead of computing them at the end. This takes more memory, but the model can average over the attention matrices rather than over the Qs and Ks before taking their product. Required for the "mean_full" head_agg. Defaults to False.

  • symmetrize (bool, default: False ) –

    Whether to symmetrize the attention/GRN matrix. Defaults to False.

  • loc (str, default: './' ) –

    Location to save results. Defaults to "./".

  • use_knn (bool, default: True ) –

    Whether to use k-nearest neighbors information. Defaults to True.
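
A minimal usage sketch (model and adata are assumed to be a trained scPRINT model and a preprocessed AnnData; the cell type label is hypothetical):

from scprint2.tasks.grn import GNInfer

grn_inferer = GNInfer(
    how="most var within",
    num_genes=3000,
    max_cells=1024,
    preprocess="softmax",
    head_agg="mean",
    filtration="thresh",
)
grn = grn_inferer(model, adata, cell_type="B cell")
grn.varp["GRN"]  # the inferred network, as read by default_benchmark below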

Methods:

  • __call__ – Run the full inference pipeline on an AnnData object.
  • aggregate – Aggregate the Qs and Ks into attention matrices (or aggregate precomputed attention matrices).
  • filter – Filter the attention matrix according to the user inputs.
  • predict – Compute the Q/K or attention matrices from the AnnData with the model.

Source code in scprint2/tasks/grn.py
def __init__(
    self,
    batch_size: int = 64,
    num_workers: int = 8,
    drop_unexpressed: bool = True,
    num_genes: int = 3000,
    max_cells: int = 0,
    cell_type_col: str = "cell_type",
    how: str = "random expr",  # random expr, most var within, most var across, some
    genelist: Optional[List[str]] = None,
    layer: Optional[List[int]] = None,
    preprocess: str = "softmax",  # sinkhorn, softmax, none
    head_agg: str = "mean",  # mean, sum, none, mean_full
    filtration: str = "thresh",  # thresh, top-k, mst, known, none
    k: int = 10,
    known_grn: Optional[Any] = None,
    precomp_attn: bool = False,
    symmetrize: bool = False,
    loc: str = "./",
    use_knn: bool = True,
):
    """
    GNInfer a class to infer gene regulatory networks from a dataset using a scPRINT model.

    Args:
        batch_size (int, optional): Batch size for processing. Defaults to 64.
        num_workers (int, optional): Number of workers for data loading. Defaults to 8.
        drop_unexpressed (bool, optional): Whether to drop unexpressed genes. Defaults to True.
            In this context, genes that have no expression in the dataset are dropped.
        num_genes (int, optional): Number of genes to consider. Defaults to 3000.
        max_cells (int, optional): Maximum number of cells to consider. Defaults to 0 (use all cells).
            If set to less than the total number of cells, only the top `max_cells` cells with the most counts will be considered.
        cell_type_col (str, optional): Column name for cell type information. Defaults to "cell_type".
        how (str, optional): Method to select genes. Options are "most var within", "most var across", "random expr", "some", "most expr". Defaults to "random expr".
            - "most var across": select the most variable genes across all cell types
            - "most var within": select the most variable genes within a cell type
            - "random expr": select random expressed genes
            - "some": select a subset of genes defined in genelist
            - "most expr": select the most expressed genes in the cell type
        genelist (list, optional): List of genes to consider. Defaults to None (treated as an empty list).
        layer (Optional[List[int]], optional): List of layers to use for the inference. Defaults to None.
        preprocess (str, optional): Preprocessing method. Options are "softmax", "sinkhorn", "softpick", "none". Defaults to "softmax".
        head_agg (str, optional): Aggregation method for heads. Options are "mean", "mean_full", "max", "none". Defaults to "mean".
        filtration (str, optional): Filtration method for the adjacency matrix. Options are "thresh", "top-k", "mst", "tmfg", "known", "none". Defaults to "thresh".
        k (int, optional): Number of top connections to keep if filtration is "top-k". Defaults to 10.
        known_grn (optional): Known gene regulatory network to use as a reference. Defaults to None.
            - We will only keep the genes that are present in the known GRN.
        precomp_attn (bool, optional): Whether to let the model precompute the attention matrices instead of computing them at the end.
            This takes more memory, but the model can average over the attention matrices rather
            than over the Qs and Ks before taking their product.
            It is required for the "mean_full" head_agg. Defaults to False.
        symmetrize (bool, optional): Whether to symmetrize the attention/GRN matrix. Defaults to False.
        loc (str, optional): Location to save results. Defaults to "./".
        use_knn (bool, optional): Whether to use k-nearest neighbors information. Defaults to True.
    """
    self.batch_size = batch_size
    self.num_workers = num_workers
    self.layer = layer
    self.loc = loc
    self.how = how
    assert self.how in [
        "most var within",
        "most var across",
        "random expr",
        "some",
        "most expr",
    ], "how must be one of 'most var within', 'most var across', 'random expr', 'some', 'most expr'"
    self.genelist = genelist if genelist is not None else []
    # genelist must be set before num_genes, which depends on it when how == "some"
    self.num_genes = num_genes if self.how != "some" else len(self.genelist)
    self.preprocess = preprocess
    self.cell_type_col = cell_type_col
    self.filtration = filtration
    self.k = k
    self.symmetrize = symmetrize
    self.known_grn = known_grn
    self.head_agg = head_agg
    self.max_cells = max_cells
    self.curr_genes = None
    self.drop_unexpressed = drop_unexpressed
    self.use_knn = use_knn
    if self.filtration != "none" and self.head_agg == "none":
        raise ValueError("filtration must be 'none' when head_agg is 'none'")

__call__

Run the full inference pipeline on an AnnData object.

Parameters:
  • model (Module) –

    The model to be used for generating the network

  • adata (AnnData) –

    Annotated data matrix of shape n_obs × n_vars. n_obs is the number of cells and n_vars is the number of genes.

  • cell_type (str, default: None ) –

    Specific cell type to filter the data. Defaults to None.

Returns:
  • AnnData –

    Annotated data matrix with predictions and annotations.

  • np.ndarray –

    Filtered adjacency matrix.

Source code in scprint2/tasks/grn.py
def __call__(self, model: torch.nn.Module, adata: AnnData, cell_type=None) -> tuple[AnnData, np.ndarray]:
    """
    Run the full inference pipeline on an AnnData object.

    Args:
        model (torch.nn.Module): The model to be used for generating the network
        adata (AnnData): Annotated data matrix of shape `n_obs` × `n_vars`. `n_obs` is the number of cells and `n_vars` is the number of genes.
        cell_type (str, optional): Specific cell type to filter the data. Defaults to None.

    Returns:
        AnnData: Annotated data matrix with predictions and annotations.
        np.ndarray: Filtered adjacency matrix.
    """
    # Add at least the organism you are working with
    if self.layer is None:
        self.layer = list(range(model.nlayers))
    self.n_cell_embs = model.attn.additional_tokens
    subadata = self.predict(model, adata, self.layer, cell_type)
    adjacencies = self.aggregate(model)
    model.attn.data = None
    if self.head_agg == "none":
        return self.save(
            adjacencies[self.n_cell_embs :, self.n_cell_embs :, :],
            subadata,
        )
    else:
        return self.save(
            self.filter(adjacencies)[self.n_cell_embs :, self.n_cell_embs :],
            subadata,
            loc=self.loc,
        )

aggregate

Aggregate the Qs and Ks and compute the attention matrices, or aggregate precomputed attention matrices, or do nothing if aggregation was already done.
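
A standalone sketch of the core aggregation step with preprocess="softmax" and head_agg="mean" (shapes are assumptions; Qs and Ks stack the per-layer/per-head query and key projections):

import torch

Qs, Ks = torch.randn(4, 16, 8), torch.randn(4, 16, 8)  # (n_matrices, seq_len, head_dim)
attns = None
for i in range(Qs.shape[0]):
    attn = Qs[i] @ Ks[i].T              # raw (seq_len, seq_len) scores
    attn = attn * Qs.shape[-1] ** -0.5  # scale by head_dim ** -0.5
    attn = torch.softmax(attn, dim=-1)
    attns = attn + (attns if attns is not None else 0)  # "mean": accumulate...
attns = attns / Qs.shape[0]             # ...then divide by the matrix count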

Source code in scprint2/tasks/grn.py
def aggregate(self, model):
    """
    Aggregate the Qs and Ks and compute the attention matrices,
    or aggregate precomputed attention matrices,
    or do nothing if aggregation was already done.
    """
    attn, genes = model.attn.get(), model.genes
    if model.attn.precomp_attn:
        self.curr_genes = [i for i in genes if i in self.curr_genes]
        return attn.detach().cpu().numpy()
    if self.how == "random expr" and self.drop_unexpressed:
        keep = np.array(
            [1] * self.n_cell_embs + [i in self.curr_genes for i in genes],
            dtype=bool,
        )
        attn = attn[:, keep, :, :, :]
    badloc = torch.isnan(attn.sum((0, 2, 3, 4)))
    attn = attn[:, ~badloc, :, :, :]
    badloc = badloc.detach().cpu().numpy()
    self.curr_genes = (
        np.array(self.curr_genes)[~badloc[self.n_cell_embs :]]
        if self.how == "random expr"
        else [i for i in genes if i in self.curr_genes]
    )
    # attn = attn[:, :, 0, :, :].permute(0, 2, 1, 3) @ attn[:, :, 1, :, :].permute(
    #    0, 2, 3, 1
    # )
    attns = None
    Qs = (
        attn[:, :, 0, :, :]
        .permute(0, 2, 1, 3)
        .reshape(-1, attn.shape[1], attn.shape[-1])
    )
    Ks = (
        attn[:, :, 1, :, :]
        .permute(0, 2, 1, 3)
        .reshape(-1, attn.shape[1], attn.shape[-1])
    )
    for i in range(Qs.shape[0]):
        attn = Qs[i] @ Ks[i].T
        # return attn

        if self.preprocess == "sinkhorn":
            scale = Qs.shape[-1] ** -0.5
            attn = attn * scale
            if attn.numel() > 100_000_000:
                raise ValueError("you can't sinkhorn such a large matrix")
            sink = SinkhornDistance(0.1, max_iter=200)
            attn = sink(attn)[0]
            attn = attn * Qs.shape[-1]
        elif self.preprocess == "softmax":
            scale = Qs.shape[-1] ** -0.5
            attn = attn * scale
            attn = torch.nn.functional.softmax(attn, dim=-1)
        elif self.preprocess == "softpick":
            attn = softpick(attn)
        elif self.preprocess == "none":
            pass
        else:
            raise ValueError(
                "preprocess must be one of 'sinkhorn', 'softmax', 'none'"
            )
        if self.symmetrize:
            attn = (attn + attn.T) / 2
        if self.head_agg == "mean":
            attns = attn + (attns if attns is not None else 0)
        elif self.head_agg == "max":
            attns = torch.max(attn, attns) if attns is not None else attn
        elif self.head_agg == "none":
            attn = attn.reshape(attn.shape[0], attn.shape[1], 1)
            if attns is not None:
                attns = torch.cat((attns, attn.detach().cpu()), dim=2)
            else:
                attns = attn.detach().cpu()
        else:
            raise ValueError(
                "head_agg must be one of 'mean', 'mean_full', 'max' or 'none'"
            )
    if self.head_agg == "mean":
        attns = attns / Qs.shape[0]
    return (
        attns.detach().cpu().numpy() if self.head_agg != "none" else attns.numpy()
    )

filter

Filter the attention matrix according to the user inputs.
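
A standalone sketch of the filtration="thresh" branch on a toy adjacency matrix:

import numpy as np
import scipy.sparse

rng = np.random.default_rng(0)
adj = rng.random((100, 100))
adj[adj < (1 / adj.shape[-1])] = 0  # drop links below the 1/n threshold
if (adj != 0).sum() / adj.shape[0] ** 2 < 0.01:
    adj = scipy.sparse.csr_matrix(adj)  # sparsify when <1% of links remain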

Source code in scprint2/tasks/grn.py
def filter(self, adj, gt=None):
    """
    Filter the attention matrix according to the user inputs.
    """
    if self.filtration == "thresh":
        adj[adj < (1 / adj.shape[-1])] = 0
        res = (adj != 0).sum()
        if res / adj.shape[0] ** 2 < 0.01:
            adj = scipy.sparse.csr_matrix(adj)
    elif self.filtration == "none":
        pass
    elif self.filtration == "top-k":
        args = np.argsort(adj)
        adj[np.arange(adj.shape[0])[:, None], args[:, : -self.k]] = 0
        adj = scipy.sparse.csr_matrix(adj)
    elif self.filtration == "known" and gt is not None:
        gt = gt.reindex(sorted(gt.columns), axis=1)
        gt = gt.reindex(sorted(gt.columns), axis=0)
        gt = gt[gt.index.isin(self.curr_genes)].iloc[
            :, gt.columns.isin(self.curr_genes)
        ]

        loc = np.isin(self.curr_genes, gt.index)
        self.curr_genes = np.array(self.curr_genes)[loc]
        adj = adj[self.n_cell_embs :, self.n_cell_embs :][loc][:, loc]
        adj[gt.values != 1] = 0
        adj = scipy.sparse.csr_matrix(adj)
    elif self.filtration == "tmfg":
        adj = nx.to_scipy_sparse_array(tmfg(adj))
    elif self.filtration == "mst":
        pass
    else:
        raise ValueError(
            "filtration must be one of 'thresh', 'top-k', 'known', 'tmfg', 'mst' or 'none'"
        )
    res = (adj != 0).sum() if self.filtration != "none" else adj.shape[0] ** 2
    print(f"link count: {res}, sparsity: {res / adj.shape[0] ** 2}")
    return adj

predict

Compute the Q/K or attention matrices from the AnnData with the model.

Source code in scprint2/tasks/grn.py
def predict(self, model, adata, layer, cell_type=None):
    """
    Compute the Q/K or attention matrices from the AnnData with the model.
    """
    self.curr_genes = None
    model.pred_log_adata = False
    if cell_type is not None:
        subadata = adata[adata.obs[self.cell_type_col] == cell_type].copy()
    else:
        subadata = adata.copy()
    if self.how == "most var within":
        try:
            sc.pp.highly_variable_genes(
                subadata, flavor="seurat_v3", n_top_genes=self.num_genes
            )
        except ValueError:
            sc.pp.highly_variable_genes(
                subadata,
                flavor="seurat_v3",
                n_top_genes=self.num_genes,
                span=0.6,
            )
        self.curr_genes = (
            subadata.var.index[subadata.var.highly_variable].tolist()
            + self.genelist
        )
        print(
            "number of expressed genes in this cell type: "
            + str((subadata.X.sum(0) > 1).sum())
        )
    elif self.how == "most var across" and cell_type is not None:
        adata.raw = adata
        sc.tl.rank_genes_groups(
            adata,
            mask_var=adata.var.index.isin(model.genes),
            groupby=self.cell_type_col,
            groups=[cell_type],
        )
        diff_expr_genes = adata.uns["rank_genes_groups"]["names"][cell_type]
        diff_expr_genes = [gene for gene in diff_expr_genes if gene in model.genes]
        self.curr_genes = diff_expr_genes[: self.num_genes] + self.genelist
        self.curr_genes.sort()
    elif self.how == "random expr":
        self.curr_genes = model.genes
        # raise ValueError("cannot do it yet")
        pass
    elif self.how == "some" and len(self.genelist) > 0:
        self.curr_genes = self.genelist
    elif self.how == "most expr":
        self.curr_genes = adata.var.index[
            adata.X.sum(0).A1.argsort()[::-1]
        ].tolist()[: self.num_genes]
    else:
        raise ValueError("something wrong with your inputs")
    if self.drop_unexpressed:
        expr = subadata.var[(subadata.X.sum(0) > 0).tolist()[0]].index.tolist()
        self.curr_genes = [i for i in self.curr_genes if i in expr]
    # Order cells by total count
    cell_sums = subadata.X.sum(axis=1)
    order = np.argsort(
        -cell_sums.A1 if scipy.sparse.issparse(subadata.X) else -cell_sums
    )
    subadata = subadata[order].copy()
    subadata = subadata[: self.max_cells] if self.max_cells else subadata
    if len(subadata) == 0:
        raise ValueError("no cells in the dataset")
    adataset = SimpleAnnDataset(
        subadata,
        obs_to_output=["organism_ontology_term_id"],
        get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
    )
    col = Collator(
        organisms=model.organisms,
        valid_genes=model.genes,
        max_len=self.num_genes if self.how == "random expr" else 0,
        how="some" if self.how != "random expr" else "random expr",
        genelist=self.curr_genes if self.how != "random expr" else [],
        n_bins=model.n_input_bins if model.expr_emb_style == "binned" else 0,
    )
    dataloader = DataLoader(
        adataset,
        collate_fn=col,
        batch_size=self.batch_size,
        num_workers=self.num_workers,
        shuffle=False,
    )
    model.attn.precomp_attn = self.head_agg == "mean_full"
    if self.num_genes > 10_000 and model.attn.precomp_attn:
        raise ValueError("need less genes for a non-shared-qk version")
    prevplot = model.doplot

    model.doplot = False
    model.on_predict_epoch_start()
    model.eval()
    model.attn.data = None
    # reparametrize the attn process

    if model.transformer.attn_type == "hyper":
        self.curr_genes = [i for i in model.genes if i in self.curr_genes]
        num = (1 if model.use_metacell_token else 0) + (
            (len(model.classes) + 1) if not model.cell_transformer else 0
        )
        if (len(self.curr_genes) + num) % 128 != 0:
            self.curr_genes = self.curr_genes[
                : (len(self.curr_genes) // 128 * 128) - num
            ]
    if self.how != "random expr":
        if model.attn.precomp_attn:
            model.attn.gene_dim = len(set(self.curr_genes) & set(model.genes))
            model.attn.apply_softmax = self.preprocess == "softmax"
        else:
            if subadata.obs["organism_ontology_term_id"].unique().shape[0] > 1:
                raise ValueError(
                    "only one organism at a time is supported for precomp_attn"
                )
            n = False
            for i, k in col.start_idx.items():
                if n:
                    model.attn.gene_dim = k - model.attn.speciesloc
                    break
                if i == subadata.obs["organism_ontology_term_id"].unique()[0]:
                    model.attn.speciesloc = k
                    n = True
    elif not model.attn.precomp_attn:
        raise ValueError(
            "full attention (i.e. precomp_attn=True) is not supported for random expr"
        )
    device = model.device.type
    dtype = (
        torch.float16
        if isinstance(model.transformer, FlashTransformer)
        else model.dtype
    )
    with torch.no_grad(), torch.autocast(device_type=device, dtype=dtype):
        for batch in tqdm(dataloader):
            gene_pos, expression, depth = (
                batch["genes"].to(device),
                batch["x"].to(device),
                batch["depth"].to(device),
            )
            model._predict(
                gene_pos,
                expression,
                depth,
                knn_cells=(
                    batch["knn_cells"].to(device)
                    if model.expr_emb_style == "metacell" and self.use_knn
                    else None
                ),
                knn_cells_info=(
                    batch["knn_cells_info"].to(device)
                    if model.expr_emb_style == "metacell" and self.use_knn
                    else None
                ),
                keep_output=False,
                get_attention_layer=layer if type(layer) is list else [layer],
            )
            torch.cuda.empty_cache()
    model.doplot = prevplot
    return subadata

default_benchmark

default_benchmark function to run the default scPRINT GRN benchmark

Parameters:
  • model (Any) –

    The scPRINT model to be used for the benchmark.

  • default_dataset (str, default: 'sroy' ) –

    The default dataset to use for benchmarking. Defaults to "sroy".

  • cell_types (List[str], default: [] ) –

    List of cell types to include in the benchmark. Defaults to [].

  • maxlayers (int, default: 16 ) –

    Maximum number of layers to use from the model. Defaults to 16.

  • maxgenes (int, default: 5000 ) –

    Maximum number of genes to consider. Defaults to 5000.

  • batch_size (int, default: 32 ) –

    Batch size for processing. Defaults to 32.

  • maxcells (int, default: 1024 ) –

    Maximum number of cells to consider. Defaults to 1024.

Returns:
  • dict –

    A dictionary containing the benchmark metrics.
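
A minimal invocation sketch (the model is assumed to be a trained scPRINT model; metric keys follow the source below, e.g. "mean_han_full", "omni_classifier", "self_han_full"):

from scprint2.tasks.grn import default_benchmark

metrics = default_benchmark(model, default_dataset="sroy", maxcells=1024, batch_size=32)
print(sorted(metrics.keys()))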

Source code in scprint2/tasks/grn.py
def default_benchmark(
    model: Any,
    default_dataset: str = "sroy",
    cell_types: List[str] = [],
    maxlayers: int = 16,
    maxgenes: int = 5000,
    batch_size: int = 32,
    maxcells: int = 1024,
) -> dict:
    """
    default_benchmark function to run the default scPRINT GRN benchmark

    Args:
        model (Any): The scPRINT model to be used for the benchmark.
        default_dataset (str, optional): The default dataset to use for benchmarking. Defaults to "sroy".
        cell_types (List[str], optional): List of cell types to include in the benchmark. Defaults to [].
        maxlayers (int, optional): Maximum number of layers to use from the model. Defaults to 16.
        maxgenes (int, optional): Maximum number of genes to consider. Defaults to 5000.
        batch_size (int, optional): Batch size for processing. Defaults to 32.
        maxcells (int, optional): Maximum number of cells to consider. Defaults to 1024.

    Returns:
        dict: A dictionary containing the benchmark metrics.
    """
    metrics = {}
    layers = list(range(model.nlayers))[max(0, model.nlayers - maxlayers) :]
    clf_omni = None
    if default_dataset == "sroy":
        preprocessor = Preprocessor(
            is_symbol=True,
            force_preprocess=True,
            skip_validate=True,
            do_postp=model.expr_emb_style == "metacell",
            min_valid_genes_id=5000,
            min_dataset_size=64,
            keepdata=True,
        )
        clf_self = None
        todo = [
            ("han", "human", "full"),
            ("mine", "human", "full"),
            ("han", "human", "chip"),
            ("han", "human", "ko"),
            ("tran", "mouse", "full"),
            ("zhao", "mouse", "full"),
            ("tran", "mouse", "chip"),
            ("tran", "mouse", "ko"),
        ]
        for da, spe, gt in todo:
            if gt != "full":
                continue
            if "NCBITaxon:10090" not in model.organisms and spe == "mouse":
                continue
            print(da + "_" + gt)
            preadata = get_sroy_gt(get=da, species=spe, gt=gt)
            adata = preprocessor(preadata.copy())
            if model.expr_emb_style == "metacell":
                sc.pp.neighbors(adata, use_rep="X_pca")
            grn_inferer = GNInfer(
                layer=layers,
                how="most var within",
                preprocess=(
                    "softpick"
                    if model.attention in ["softpick", "softpick-flash"]
                    else "softmax"
                ),
                head_agg="none",
                filtration="none",
                num_genes=maxgenes,
                num_workers=8,
                max_cells=maxcells,
                batch_size=batch_size,
            )
            grn = grn_inferer(model, adata)
            grn.varp["all"] = grn.varp["GRN"]
            grn.var["ensembl_id"] = grn.var.index
            grn.var["symbol"] = make_index_unique(grn.var["symbol"].astype(str))
            grn.var.index = grn.var["symbol"]
            grn.varp["GRN"] = grn.varp["all"].mean(-1).T
            metrics["mean_" + da + "_" + gt] = BenGRN(
                grn, do_auc=True, doplot=False
            ).compare_to(other=preadata)
            grn.varp["GRN"] = grn.varp["GRN"].T
            if spe == "human":
                metrics["mean_" + da + "_" + gt + "_base"] = BenGRN(
                    grn, do_auc=True, doplot=False
                ).scprint_benchmark()

            ## OMNI
            if clf_omni is None:
                grn.varp["GRN"] = grn.varp["all"]
                _, m, clf_omni = train_classifier(
                    grn,
                    C=1,
                    train_size=0.9,
                    class_weight={1: 800, 0: 1},
                    shuffle=True,
                    return_full=False,
                )
                joblib.dump(clf_omni, "clf_omni.pkl")
                metrics["omni_classifier"] = m
            coef = clf_omni.coef_[0] if clf_omni.coef_.shape[0] == 1 else clf_omni.coef_
            grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1)
            if spe == "human":
                metrics["omni_" + da + "_" + gt + "_base"] = BenGRN(
                    grn, do_auc=True, doplot=True
                ).scprint_benchmark()
            grn.varp["GRN"] = grn.varp["GRN"].T
            metrics["omni_" + da + "_" + gt] = BenGRN(
                grn, do_auc=True, doplot=False
            ).compare_to(other=preadata)

            ## SELF
            if clf_self is None:
                grn.varp["GRN"] = np.transpose(grn.varp["all"], (1, 0, 2))
                _, m, clf_self = train_classifier(
                    grn,
                    other=preadata,
                    C=1,
                    train_size=0.5,
                    class_weight={1: 40, 0: 1},
                    shuffle=False,
                    return_full=False,
                )
                metrics["self_classifier"] = m
            coef = clf_self.coef_[0] if clf_self.coef_.shape[0] == 1 else clf_self.coef_
            grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1).T
            metrics["self_" + da + "_" + gt] = BenGRN(
                grn, do_auc=True, doplot=False
            ).compare_to(other=preadata)
            if spe == "human":
                grn.varp["GRN"] = grn.varp["GRN"].T
                metrics["self_" + da + "_" + gt + "_base"] = BenGRN(
                    grn, do_auc=True, doplot=True
                ).scprint_benchmark()

            ## chip / ko
            if (da, spe, "chip") in todo:
                preadata = get_sroy_gt(get=da, species=spe, gt="chip")
                grn.varp["GRN"] = grn.varp["all"].mean(-1).T
                metrics["mean_" + da + "_" + "chip"] = BenGRN(
                    grn, do_auc=True, doplot=False
                ).compare_to(other=preadata)
                grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1).T
                metrics["omni_" + da + "_" + "chip"] = BenGRN(
                    grn, do_auc=True, doplot=False
                ).compare_to(other=preadata)

                grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1).T
                metrics["self_" + da + "_" + "chip"] = BenGRN(
                    grn, do_auc=True, doplot=False
                ).compare_to(other=preadata)
            if (da, spe, "ko") in todo:
                preadata = get_sroy_gt(get=da, species=spe, gt="ko")
                grn.varp["GRN"] = grn.varp["all"].mean(-1).T
                metrics["mean_" + da + "_" + "ko"] = BenGRN(
                    grn, do_auc=True, doplot=False
                ).compare_to(other=preadata)
                grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1).T
                metrics["omni_" + da + "_" + "ko"] = BenGRN(
                    grn, do_auc=True, doplot=False
                ).compare_to(other=preadata)
                grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1).T
                metrics["self_" + da + "_" + "ko"] = BenGRN(
                    grn, do_auc=True, doplot=False
                ).compare_to(other=preadata)
            del grn
    elif default_dataset == "gwps":
        adata = get_perturb_gt()
        preprocessor = Preprocessor(
            force_preprocess=True,
            keepdata=True,
            skip_validate=True,
            do_postp=model.expr_emb_style == "metacell",
            min_valid_genes_id=maxgenes,
            min_dataset_size=64,
        )
        nadata = preprocessor(adata.copy())
        if model.expr_emb_style == "metacell":
            sc.pp.neighbors(nadata, use_rep="X_pca")
        nadata.var["isTF"] = False
        nadata.var.loc[nadata.var.gene_name.isin(grnutils.TF), "isTF"] = True
        nadata.var["isTF"].sum()
        grn_inferer = GNInfer(
            layer=layers,
            how="most var within",
            preprocess=(
                "softpick"
                if model.attention in ["softpick", "softpick-flash"]
                else "softmax"
            ),
            head_agg="none",
            filtration="none",
            num_genes=maxgenes,
            max_cells=maxcells,
            num_workers=8,
            batch_size=batch_size,
        )
        grn = grn_inferer(model, nadata)
        del nadata
        grn.varp["all"] = grn.varp["GRN"]

        grn.varp["GRN"] = grn.varp["all"].mean(-1).T
        metrics["mean"] = BenGRN(grn, do_auc=True, doplot=False).compare_to(other=adata)
        grn.var["ensembl_id"] = grn.var.index
        grn.var.index = grn.var["symbol"]
        grn.varp["GRN"] = grn.varp["all"].mean(-1)
        metrics["mean_base"] = BenGRN(
            grn, do_auc=True, doplot=False
        ).scprint_benchmark()

        grn.varp["GRN"] = grn.varp["all"]
        grn.var.index = grn.var["ensembl_id"]
        _, m, clf_omni = train_classifier(
            grn,
            C=1,
            train_size=0.9,
            class_weight={1: 800, 0: 1},
            shuffle=True,
            doplot=False,
            return_full=False,
            use_col="gene_name",
        )
        coef = clf_omni.coef_[0] if clf_omni.coef_.shape[0] == 1 else clf_omni.coef_
        grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1).T
        metrics["omni"] = BenGRN(grn, do_auc=True, doplot=False).compare_to(other=adata)
        metrics["omni_classifier"] = m
        grn.var.index = grn.var["symbol"]
        grn.varp["GRN"] = grn.varp["GRN"].T
        metrics["omni_base"] = BenGRN(
            grn, do_auc=True, doplot=False
        ).scprint_benchmark()
        grn.varp["GRN"] = np.transpose(grn.varp["all"], (1, 0, 2))
        grn.var.index = grn.var["ensembl_id"]
        _, m, clf_self = train_classifier(
            grn,
            other=adata,
            C=1,
            train_size=0.5,
            class_weight={1: 40, 0: 1},
            doplot=False,
            shuffle=False,
            return_full=False,
            use_col="ensembl_id",
        )
        coef = clf_self.coef_[0] if clf_self.coef_.shape[0] == 1 else clf_self.coef_
        grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1).T
        metrics["self"] = BenGRN(grn, do_auc=True, doplot=False).compare_to(other=adata)
        metrics["self_classifier"] = m
        grn.var.index = grn.var["symbol"]
        grn.varp["GRN"] = grn.varp["GRN"].T
        metrics["self_base"] = BenGRN(
            grn, do_auc=True, doplot=False
        ).scprint_benchmark()
    elif default_dataset == "genernib":
        raise NotImplementedError("the 'genernib' benchmark is not implemented yet")
        # for adata in [NORMAN, OP, ADAMSON]:
        #   adata = sc.read_h5ad(adata)
        #   adata.obs["organism_ontology_term_id"] = "NCBITaxon:9606"
        #   preprocessor = Preprocessor(
        #       force_preprocess=False,
        #       skip_validate=True,
        #       drop_non_primary=False,
        #       do_postp=False,
        #       min_valid_genes_id=1000,
        #       min_dataset_size=64,
        #       keepdata=True,
        #       is_symbol=True,
        #       use_raw=False,
        #   )
        #   adata = preprocessor(adata.copy())
        #   run_gene_rnib(
        #      adata=adata,
        #      model=model,
        #      layer=layers,
        #      how="most var within",
        #      preprocess="softmax",
        #   )
        #   grn_inferer = GNInfer(
        #      how="most var across",
        #      preprocess="softmax",
        #      head_agg="mean",
        #      filtration="none",
        #      forward_mode="none",
        #      num_genes=3_000,
        #      max_cells=3000,
        #      batch_size=10,
        #      cell_type_col="perturbation",
        #      layer=list(range(model.nlayers))[:],
        #   )
        # grn = grn_inferer(model, adata, cell_type="ctrl")
        # grn.var.index = make_index_unique(grn.var["symbol"].astype(str))

    else:
        # max_genes=4000
        if default_dataset.startswith("https://"):
            adata = sc.read(
                FILEDIR + "/../../data/" + default_dataset.split("/")[-1],
                backup_url=default_dataset,
            )
        else:
            adata = sc.read_h5ad(default_dataset)
        if default_dataset.split("/")[-1] in ["yBCKp6HmXuHa0cZptMo7.h5ad"]:
            use_layer = "counts"
            is_symbol = True
        else:
            use_layer = None
            is_symbol = False

        preprocessor = Preprocessor(
            use_layer=use_layer,
            is_symbol=is_symbol,
            force_preprocess=True,
            skip_validate=True,
            do_postp=model.expr_emb_style == "metacell",
            drop_non_primary=False,
        )
        adata = preprocessor(adata.copy())

        adata.var["isTF"] = False
        adata.var.loc[adata.var.symbol.isin(grnutils.TF), "isTF"] = True
        if model.expr_emb_style == "metacell":
            if "X_pca" not in adata.obsm:
                sc.pp.pca(adata, n_comps=50)
            sc.pp.neighbors(adata, use_rep="X_pca")
        for celltype in list(adata.obs["cell_type"].unique())[:14]:
            # print(celltype)
            # grn_inferer = GNInfer(
            #    layer=layers,
            #    how="random expr",
            #    preprocess="softmax",
            #    head_agg="max",
            #    filtration="none",
            #    num_workers=8,
            #    num_genes=2200,
            #    max_cells=maxcells,
            #    batch_size=batch_size,
            # )
            #
            # grn = grn_inferer(model, adata[adata.X.sum(1) > 500], cell_type=celltype)
            # grn.var.index = make_index_unique(grn.var["symbol"].astype(str))
            # metrics[celltype + "_scprint"] = BenGRN(
            #    grn, doplot=False
            # ).scprint_benchmark()
            # del grn
            # gc.collect()
            grn_inferer = GNInfer(
                layer=layers,
                how="most var across",
                preprocess=(
                    "softpick"
                    if model.attention in ["softpick", "softpick-flash"]
                    else "softmax"
                ),
                head_agg="none",
                filtration="none",
                num_workers=8,
                num_genes=maxgenes,
                max_cells=maxcells,
                batch_size=batch_size,
            )
            grn = grn_inferer(model, adata[adata.X.sum(1) > 500], cell_type=celltype)
            grn.var.index = make_index_unique(grn.var["symbol"].astype(str))
            grn.varp["all"] = grn.varp["GRN"]
            grn.varp["GRN"] = grn.varp["GRN"].mean(-1)
            metrics[celltype + "_scprint_mean"] = BenGRN(
                grn, doplot=False
            ).scprint_benchmark()
            if clf_omni is None:
                grn.varp["GRN"] = grn.varp["all"]
                _, m, clf_omni = train_classifier(
                    grn,
                    C=1,
                    train_size=0.6,
                    max_iter=300,
                    class_weight={1: 800, 0: 1},
                    return_full=False,
                    shuffle=True,
                    doplot=False,
                )
                joblib.dump(clf_omni, "clf_omni.pkl")
                metrics["classifier"] = m
            coef = clf_omni.coef_[0] if clf_omni.coef_.shape[0] == 1 else clf_omni.coef_
            grn.varp["GRN"] = grn.varp["all"][:, :, coef > 0].mean(-1)
            metrics[celltype + "_scprint_class"] = BenGRN(
                grn, doplot=False
            ).scprint_benchmark()
            del grn
            gc.collect()
    return metrics
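
As a usage sketch (model loading is not covered on this page, so model is assumed to be an already loaded scPRINT model; the benchmark itself is called exactly as documented above):

from scprint2.tasks.grn import default_benchmark

# model: an already loaded scPRINT model (loading is not shown on this page)
metrics = default_benchmark(
    model,
    default_dataset="sroy",  # or "gwps", or a path/URL to an .h5ad dataset
    maxlayers=16,
    maxgenes=5000,
    batch_size=32,
    maxcells=1024,
)
print(sorted(metrics))  # mean_*/omni_*/self_* entries plus classifier metrics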

scprint2.tasks.denoise

Classes:

Name Description
Denoiser

Functions:

Name Description
default_benchmark

default_benchmark function used to run the default denoising benchmark of scPRINT

split_molecules

Splits molecules into two (potentially overlapping) groups.

Denoiser

Denoiser class for denoising scRNA-seq data using a scPRINT model

Parameters:
  • batch_size (int, default: 10 ) –

    Batch size for processing. Defaults to 10.

  • num_workers (int, default: 1 ) –

    Number of workers for data loading. Defaults to 1.

  • max_len (int, default: 5000 ) –

    Maximum number of genes to consider. Defaults to 5000.

  • how (str, default: 'most var' ) –

    Method to select genes. Options are "most var", "random expr", "some". Defaults to "most var".
    - "most var": select the most variable genes
    - "random expr": select random expressed genes
    - "some": select a subset of genes defined in genelist

  • max_cells (int, default: 500000 ) –

    Maximum number of cells to denoise; if the dataset is larger, a random subset of this size is used. Defaults to 500_000.

  • doplot (bool, default: False ) –

    Whether to generate plots of the similarity between the denoised and true expression data. Defaults to False. Only works when downsample_expr is not None and max_cells < 100.

  • predict_depth_mult (int, default: 4 ) –

    Multiplier for prediction depth. Defaults to 4. This will artificially increase the sequencing depth (or number of counts) to 4 times the original depth.

  • downsample_expr (Optional[float], default: None ) –

    Fraction of expression data to downsample. Defaults to None. This is useful for testing the model's ability to denoise: the input is downsampled and the original counts serve as the benchmark ground truth. When this option is set, the denoiser also outputs benchmark metrics.

  • genelist (List[str], default: None ) –

    The list of genes to be used for embedding. Defaults to []: In this case, "how" needs to be "most var" or "random expr".

  • save_every (int, default: 100000 ) –

    The number of cells to save at a time. Defaults to 100_000. This is important to avoid memory issues.

  • pred_embedding (List[str], default: ['cell_type_ontology_term_id'] ) –

    The list of labels to be used for the cell embedding, since denoising also predicts the cell embeddings. Defaults to ["cell_type_ontology_term_id"].

  • additional_info (bool, default: False ) –

    Whether to print additional benchmark information during denoising. Defaults to False. Only useful when downsampling is used.

  • apply_zero_pred (bool, default: False ) –

    Whether to apply zero inflation to the output values during denoising; otherwise only the predicted mean is used. Applying zero inflation may give results closer to the specific biases of sequencing technologies, but less biologically truthful ones.

  • use_knn (bool, default: True ) –

    Whether to use knn cells for denoising when the model uses metacell expression embedding. Defaults to True.

Methods:

Name Description
__call__

call function to run denoising on a model and dataset

Source code in scprint2/tasks/denoise.py
def __init__(
    self,
    batch_size: int = 10,
    num_workers: int = 1,
    max_len: int = 5_000,
    how: str = "most var",
    max_cells: int = 500_000,
    doplot: bool = False,
    predict_depth_mult: int = 4,
    downsample_expr: Optional[float] = None,
    genelist: Optional[List[str]] = None,
    save_every: int = 100_000,
    pred_embedding: List[str] = ["cell_type_ontology_term_id"],
    additional_info: bool = False,
    apply_zero_pred: bool = False,
    use_knn: bool = True,
):
    """
    Denoiser class for denoising scRNA-seq data using a scPRINT model

    Args:
        batch_size (int, optional): Batch size for processing. Defaults to 10.
        num_workers (int, optional): Number of workers for data loading. Defaults to 1.
        max_len (int, optional): Maximum number of genes to consider. Defaults to 5000.
        how (str, optional): Method to select genes. Options are "most var", "random expr", "some". Defaults to "most var".
            - "most var": select the most variable genes
            - "random expr": select random expressed genes
            - "some": select a subset of genes defined in genelist
        max_cells (int, optional): Maximum number of cells to denoise; if the dataset is larger, a random subset of this size is used. Defaults to 500_000.
        doplot (bool, optional): Whether to generate plots of the similarity between the denoised and true expression data. Defaults to False.
            Only works when downsample_expr is not None and max_cells < 100.
        predict_depth_mult (int, optional): Multiplier for prediction depth. Defaults to 4.
            This will artificially increase the sequencing depth (or number of counts) to 4 times the original depth.
        downsample_expr (Optional[float], optional): Fraction of expression data to downsample. Defaults to None.
            This is useful for testing the model's ability to denoise: the input is downsampled and the original
            counts serve as the benchmark ground truth. When this option is set, the denoiser also outputs benchmark metrics.
        genelist (List[str], optional): The list of genes to be used for embedding. Defaults to []: In this case, "how" needs to be "most var" or "random expr".
        save_every (int, optional): The number of cells to save at a time. Defaults to 100_000.
            This is important to avoid memory issues.
        pred_embedding (List[str], optional): The list of labels to be used for the cell embedding, since denoising also predicts the cell embeddings. Defaults to ["cell_type_ontology_term_id"].
        additional_info (bool, optional): Whether to print additional benchmark information during denoising. Defaults to False.
            Only useful when downsampling is used.
        apply_zero_pred (bool, optional): Whether to apply zero inflation to the output values during denoising; otherwise only the predicted mean is used.
            Applying zero inflation may give results closer to the specific biases of sequencing technologies, but less biologically truthful ones.
        use_knn (bool, optional): Whether to use knn cells for denoising when the model uses metacell expression embedding. Defaults to True.
    """
    self.batch_size = batch_size
    self.num_workers = num_workers
    self.max_len = max_len
    self.max_cells = max_cells
    self.doplot = doplot
    self.predict_depth_mult = predict_depth_mult
    self.how = how
    self.downsample_expr = downsample_expr
    self.genelist = genelist
    self.save_every = save_every
    self.pred_embedding = pred_embedding
    self.additional_info = additional_info
    self.apply_zero_pred = apply_zero_pred
    self.use_knn = use_knn

__call__

call function to run denoising on a model and dataset

Parameters:
  • model (Module) –

    The scPRINT model to be used for denoising.

  • adata (AnnData) –

    The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

Returns:
  • dict( dict ) –

    The benchmark metrics if downsampling is used.

  • Optional[ndarray]

    Optional[np.ndarray]: The random set of cells used if max_cells < adata.shape[0].

  • AnnData( AnnData ) –

    The denoised annotated data matrix.

Source code in scprint2/tasks/denoise.py
def __call__(self, model: torch.nn.Module, adata: AnnData) -> tuple[dict, Optional[np.ndarray], AnnData]:
    """
    __call__ runs denoising on a model and dataset

    Args:
        model (torch.nn.Module): The scPRINT model to be used for denoising.
        adata (AnnData): The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

    Returns:
        dict: The benchmark metrics if downsampling is used.
        Optional[np.ndarray]: The random set of cells used if max_cells < adata.shape[0].
        AnnData: The denoised annotated data matrix.
    """
    # Select a random subset of cells when the dataset exceeds max_cells
    random_indices = None
    if self.max_cells < adata.shape[0]:
        random_indices = np.random.randint(
            low=0, high=adata.shape[0], size=self.max_cells
        )
        adataset = SimpleAnnDataset(
            adata[random_indices],
            obs_to_output=["organism_ontology_term_id"],
            get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
        )
    else:
        adataset = SimpleAnnDataset(
            adata,
            obs_to_output=["organism_ontology_term_id"],
            get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
        )
    if self.how == "most var":
        sc.pp.highly_variable_genes(
            adata, flavor="seurat_v3", n_top_genes=self.max_len, span=0.99
        )
        self.genelist = adata.var.index[adata.var.highly_variable]
    else:
        self.genelist = adata.var.index
    self.genelist = [i for i in model.genes if i in self.genelist]
    print(f"working on {len(self.genelist)} accepted genes")

    col = Collator(
        organisms=model.organisms,
        valid_genes=model.genes,
        max_len=self.max_len,
        how="some" if self.how == "most var" else self.how,
        genelist=self.genelist if self.how != "random expr" else [],
        n_bins=model.n_input_bins if model.expr_emb_style == "binned" else 0,
    )
    dataloader = DataLoader(
        adataset,
        collate_fn=col,
        batch_size=self.batch_size,
        num_workers=self.num_workers,
        shuffle=False,
    )

    prevplot = model.doplot
    model.doplot = self.doplot
    model.on_predict_epoch_start()
    model.eval()
    device = model.device.type
    model.pred_log_adata = True
    stored_noisy = None
    rand = random_str()
    dtype = (
        torch.float16
        if type(model.transformer) is FlashTransformer
        else model.dtype
    )
    torch.cuda.empty_cache()
    save_expr = model.save_expr
    model.save_expr = True
    with torch.no_grad(), torch.autocast(device_type=device, dtype=dtype):
        for batch in tqdm(dataloader):
            gene_pos, expression, depth = (
                batch["genes"].to(device),
                batch["x"].to(device),
                batch["depth"].to(device),
            )
            knn_cells = (
                batch["knn_cells"].to(device)
                if model.expr_emb_style == "metacell" and self.use_knn
                else None
            )
            if self.downsample_expr is not None:
                expression = utils.downsample_profile(
                    expression, self.downsample_expr
                )
                if knn_cells is not None:
                    for i in range(knn_cells.shape[1]):
                        knn_cells[:, i] = utils.downsample_profile(
                            knn_cells[:, i], self.downsample_expr
                        )
            if stored_noisy is None:
                stored_noisy = expression.cpu().numpy()
            else:
                stored_noisy = np.concatenate(
                    [stored_noisy, expression.cpu().numpy()], axis=0
                )

            model._predict(
                gene_pos,
                expression,
                depth,
                knn_cells=knn_cells,  # already on device, and downsampled above when requested
                knn_cells_info=(
                    batch["knn_cells_info"].to(device)
                    if model.expr_emb_style == "metacell" and self.use_knn
                    else None
                ),
                do_generate=False,
                depth_mult=self.predict_depth_mult,
                pred_embedding=self.pred_embedding,
                max_size_in_mem=self.save_every,
                name="denoise_" + rand + "_",
            )
    torch.cuda.empty_cache()
    model.log_adata(name="denoise_" + rand + "_" + str(model.counter))
    try:
        mdir = (
            model.logger.save_dir if model.logger.save_dir is not None else "data"
        )
    except Exception:
        mdir = "data"
    pred_adata = []
    for i in range(model.counter + 1):
        file = (
            mdir
            + "/step_"
            + str(model.global_step)
            + "_"
            + model.name
            + "_denoise_"
            + rand
            + "_"
            + str(i)
            + "_"
            + str(model.global_rank)
            + ".h5ad"
        )
        pred_adata.append(sc.read_h5ad(file))
        os.remove(file)
    pred_adata = concat(pred_adata)

    if model.transformer.attn_type == "hyper":
        # seq len must be a multiple of 128
        num = (1 if model.use_metacell_token else 0) + (
            (len(model.classes) + 1) if not model.cell_transformer else 0
        )
        if (stored_noisy.shape[1] + num) % 128 != 0:
            stored_noisy = stored_noisy[
                :, : ((stored_noisy.shape[1]) // 128 * 128) - num
            ]
    pred_adata.X = stored_noisy

    metrics = None
    model.doplot = prevplot
    model.save_expr = save_expr
    if self.downsample_expr is not None:
        reco = np.array(pred_adata.layers["scprint_mu"].data).reshape(
            pred_adata.shape[0], -1
        )
        # reco = reco * F.sigmoid(
        #    torch.Tensor(np.array(pred_adata.layers["scprint_pi"].data).reshape(pred_adata.shape[0], -1)) < 0.5
        # ).numpy()

        adata = (
            adata[random_indices, adata.var.index.isin(pred_adata.var.index)]
            if random_indices is not None
            else adata[:, adata.var.index.isin(pred_adata.var.index)]
        )
        true = adata[
            :,
            pred_adata.var.index[
                pred_adata.var.index.isin(adata.var.index)
            ].to_list(),
        ].X.toarray()
        if self.apply_zero_pred:
            reco = (
                reco
                * (
                    1
                    - F.sigmoid(
                        torch.Tensor(
                            np.array(pred_adata.layers["scprint_pi"].data).reshape(
                                pred_adata.shape[0], -1
                            )
                        )
                    )
                ).numpy()
            )

        corr_coef, p_value = spearmanr(
            np.vstack([reco[true != 0], stored_noisy[true != 0], true[true != 0]]).T
        )
        metrics = {
            "reco2noisy": corr_coef[0, 1],
            "reco2full": corr_coef[0, 2],
            "noisy2full": corr_coef[1, 2],
        }
        if self.additional_info:
            # Sample only 3000 elements for correlation calculation
            if reco.shape[0] > 3000:
                indices = np.random.choice(reco.shape[0], 3000, replace=False)
                reco = reco[indices]
                stored_noisy = stored_noisy[indices]
                true = true[indices]
            corr, p_value = spearmanr(
                np.vstack(
                    [
                        reco.flatten(),
                        stored_noisy.flatten(),
                        true.flatten(),
                    ]
                ).T
            )
            m = {
                "reco2full": corr[0, 2],
                "noisy2full": corr[1, 2],
            }
            print("corr with zeros: ")
            print(m)
            cell_wise = np.array(
                [
                    spearmanr(reco[i][true[i] != 0], true[i][true[i] != 0])[0]
                    for i in range(reco.shape[0])
                ]
            )
            torm = np.array(
                [
                    spearmanr(stored_noisy[i][true[i] != 0], true[i][true[i] != 0])[
                        0
                    ]
                    for i in range(reco.shape[0])
                ]
            )
            cell_wise -= torm
            cell_wise_zero = np.mean(
                [spearmanr(reco[i], true[i])[0] for i in range(reco.shape[0])]
            )
            print("cell_wise self corr (reco, noisy, true)")
            print(
                {
                    "cell_wise_w_zero": cell_wise_zero,
                    "cell_wise_to_noisy": np.mean(cell_wise),
                }
            )
            print("depth-wise plot")
            plot_cell_depth_wise_corr_improvement(cell_wise, (true > 0).sum(1))

        if self.doplot and self.max_cells < 100:
            corr_coef[p_value > 0.05] = 0
            plt.figure(figsize=(10, 5))
            plt.imshow(
                corr_coef, cmap="coolwarm", interpolation="none", vmin=-1, vmax=1
            )
            plt.colorbar()
            plt.title("Expression Correlation Coefficient")
            plt.show()
    return metrics, random_indices, pred_adata
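
A minimal usage sketch (the dataset path is a placeholder and model is assumed to be an already loaded scPRINT model; with downsample_expr set, the call also returns benchmark metrics):

import scanpy as sc
from scprint2.tasks.denoise import Denoiser

adata = sc.read_h5ad("my_dataset.h5ad")  # placeholder path to a preprocessed dataset
denoiser = Denoiser(
    batch_size=40,
    max_len=4000,
    max_cells=10_000,
    predict_depth_mult=5,
    downsample_expr=0.7,  # downsample so the input can serve as its own ground truth
)
metrics, used_cells, denoised = denoiser(model, adata)
print(metrics)  # Spearman correlations: reco2noisy, reco2full, noisy2full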

default_benchmark

default_benchmark function used to run the default denoising benchmark of scPRINT

Parameters:
  • model (Any) –

    The scPRINT model to be used for the benchmark.

  • folder_dir (str, default: FILE_DIR + '/../../data/' ) –

    Directory containing data files.

  • dataset (str, default: FILE_DIR + '/../../data/gNNpgpo6gATjuxTE7CCp.h5ad' ) –

    Path to the dataset to use for benchmarking.

Returns:
  • dict( dict ) –

    A dictionary containing the benchmark metrics.

Source code in scprint2/tasks/denoise.py
def default_benchmark(
    model: Any,
    folder_dir: str = FILE_DIR + "/../../data/",
    dataset: str = FILE_DIR
    + "/../../data/gNNpgpo6gATjuxTE7CCp.h5ad",  # r4iCehg3Tw5IbCLiCIbl
) -> dict:
    """
    default_benchmark function used to run the default denoising benchmark of scPRINT

    Args:
        model (Any): The scPRINT model to be used for the benchmark.
        folder_dir (str, optional): Directory containing data files.
        dataset (str, optional): Path to the dataset to use for benchmarking.

    Returns:
        dict: A dictionary containing the benchmark metrics.
    """
    if dataset.startswith("https://"):
        adata = sc.read(
            folder_dir + dataset.split("/")[-1],
            backup_url=dataset,
        )
    else:
        adata = sc.read_h5ad(dataset)
    if dataset.split("/")[-1] == "gNNpgpo6gATjuxTE7CCp.h5ad":
        use_layer = "counts"
        is_symbol = True
    else:
        use_layer = None
        is_symbol = False
    max_len = 4000 if adata.X.sum(1).mean() < 150_000 else 8000
    preprocessor = Preprocessor(
        use_layer=use_layer,
        is_symbol=is_symbol,
        force_preprocess=True,
        skip_validate=True,
        do_postp=model.expr_emb_style == "metacell",
        drop_non_primary=False,
    )
    adata = preprocessor(adata.copy())
    if model.expr_emb_style == "metacell":
        if "X_pca" not in adata.obsm:
            sc.pp.pca(adata, n_comps=50)
        sc.pp.neighbors(adata, use_rep="X_pca")
    denoise = Denoiser(
        batch_size=40 if model.expr_emb_style != "metacell" else 20,
        max_len=max_len,
        max_cells=10_000,
        doplot=False,
        num_workers=8,
        predict_depth_mult=5,
        downsample_expr=0.7,
        pred_embedding=model.pred_embedding,
    )
    return denoise(model, adata)[0]
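
Given a loaded model, running the benchmark is a one-liner; an https URL may also be passed as dataset, in which case the file is cached locally via scanpy's backup_url mechanism:

from scprint2.tasks.denoise import default_benchmark

# model: an already loaded scPRINT model
metrics = default_benchmark(model)  # uses the bundled default dataset path
print(metrics)  # {'reco2noisy': ..., 'reco2full': ..., 'noisy2full': ...}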

split_molecules

Splits molecules into two (potentially overlapping) groups.

Parameters:
  • umis (ndarray) –

    Array of molecules to split

  • data_split (float) –

    Proportion of molecules to assign to the first group

  • overlap_factor (float, default: 0.0 ) –

    Overlap correction factor, if desired

  • random_state (RandomState, default: None ) –

    For reproducible sampling

Returns:
  • Tuple[ndarray, ndarray] –

    umis_X and umis_Y, representing split and ~(1 - split) counts sampled from the input array

Source code in scprint2/tasks/denoise.py
def split_molecules(
    umis: np.ndarray,
    data_split: float,
    overlap_factor: float = 0.0,
    random_state: np.random.RandomState = None,
) -> Tuple[np.ndarray, np.ndarray]:
    """Splits molecules into two (potentially overlapping) groups.
    :param umis: Array of molecules to split
    :param data_split: Proportion of molecules to assign to the first group
    :param overlap_factor: Overlap correction factor, if desired
    :param random_state: For reproducible sampling
    :return: umis_X and umis_Y, representing ``split`` and ``~(1 - split)`` counts
             sampled from the input array
    """
    if random_state is None:
        random_state = np.random.RandomState()

    umis_X_disjoint = random_state.binomial(umis, data_split - overlap_factor)
    umis_Y_disjoint = random_state.binomial(
        umis - umis_X_disjoint, (1 - data_split) / (1 - data_split + overlap_factor)
    )
    overlap_factor = umis - umis_X_disjoint - umis_Y_disjoint
    umis_X = umis_X_disjoint + overlap_factor
    umis_Y = umis_Y_disjoint + overlap_factor

    return umis_X, umis_Y
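
For intuition, a small self-contained check (the counts are made up): with overlap_factor=0, every molecule lands in exactly one of the two groups, so the splits add back up to the input.

import numpy as np
from scprint2.tasks.denoise import split_molecules

umis = np.array([[4, 0, 2, 7],
                 [1, 3, 0, 5]])  # toy UMI counts: 2 cells x 4 genes
umis_X, umis_Y = split_molecules(
    umis, data_split=0.5, random_state=np.random.RandomState(0)
)
assert (umis_X + umis_Y == umis).all()  # disjoint split when overlap_factor == 0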

scprint2.tasks.gene_emb

Classes:

Name Description
GeneEmbeddingExtractor

GeneEmbeddingExtractor

Parameters:
  • genelist (list) –

    List of genes to restrict to.

  • batch_size (int, default: 64 ) –

    Batch size for the DataLoader. Defaults to 64.

  • num_workers (int, default: 8 ) –

    Number of workers for DataLoader. Defaults to 8.

  • save_every (int, default: 4000 ) –

    Save embeddings every save_every batches. Defaults to 4000.

  • average (bool, default: False ) –

    Whether to average embeddings across all cells. Defaults to False.

  • save_dir (str, default: None ) –

    Directory to save embeddings. If None, embeddings are not saved. Defaults to None.

  • use_knn (bool, default: False ) –

    Whether to use k-nearest neighbors information. Defaults to False.

Source code in scprint2/tasks/gene_emb.py
def __init__(
    self,
    genelist,
    batch_size: int = 64,
    num_workers: int = 8,
    save_every: int = 4_000,
    average: bool = False,
    save_dir: str = None,
    use_knn: bool = False,
):
    """
    Args:
        genelist (list): List of genes to restrict to.
        batch_size (int): Batch size for the DataLoader. Defaults to 64.
        num_workers (int): Number of workers for DataLoader. Defaults to 8.
        save_every (int): Save embeddings every `save_every` batches. Defaults to 4000.
        average (bool): Whether to average embeddings across all cells. Defaults to False.
        save_dir (str): Directory to save embeddings. If None, embeddings are not saved. Defaults to None.
        use_knn (bool): Whether to use k-nearest neighbors information. Defaults to False.

    """
    self.genelist = genelist
    self.batch_size = batch_size
    self.num_workers = num_workers
    self.save_every = save_every
    self.average = average
    self.save_dir = save_dir
    self.use_knn = use_knn
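
A usage sketch (the __call__ signature is not shown on this page; following the other task classes, it is assumed to take a model and an AnnData, and the gene IDs below are illustrative):

from scprint2.tasks.gene_emb import GeneEmbeddingExtractor

extractor = GeneEmbeddingExtractor(
    genelist=["ENSG00000141510", "ENSG00000012048"],  # illustrative Ensembl IDs
    batch_size=64,
    average=True,   # average each gene's embedding across all cells
    save_dir=None,  # keep embeddings in memory only
)
gene_embs = extractor(model, adata)  # assumed call convention, as for the other tasks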

scprint2.tasks.generate

Classes:

Name Description
Generate

Generate

Generate, a class to generate gene expression profiles from cell embeddings using a model

Parameters:
  • genelist (List[str]) –

    The list of genes for which to generate expression data.

  • batch_size (int, default: 64 ) –

    The size of the batches to be used in the DataLoader. Defaults to 64.

  • embedding_to_use (List[str], default: ['all'] ) –

    The list of embeddings to be used for generating expression. Defaults to ["all"].

Methods:

Name Description
__call__

call function to call the embedding

Source code in scprint2/tasks/generate.py
def __init__(
    self,
    genelist: List[str],
    batch_size: int = 64,
    embedding_to_use: List[str] = [
        "all",
    ],
):
    """
    Generate, a class to generate gene expression profiles from cell embeddings using a model

    Args:
        genelist (List[str]): The list of genes for which to generate expression data.
        batch_size (int, optional): The size of the batches to be used in the DataLoader. Defaults to 64.
        embedding_to_use (List[str], optional): The list of embeddings to be used for generating expression. Defaults to ["all"].
    """
    self.batch_size = batch_size
    self.embedding_to_use = embedding_to_use
    self.genelist = genelist if genelist is not None else []

__call__

call function to generate expression profiles from cell embeddings

Parameters:
  • model (Module) –

    The scPRINT model to be used for embedding and annotation.

  • adata (AnnData) –

    The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

Raises:
  • ValueError

    If the model does not have a logger attribute.

  • ValueError

    If the model does not have a global_step attribute.

Returns:
  • AnnData( AnnData ) –

    The annotated data matrix with the generated expression values, with "disp" and "zero_logits" layers when the model predicts them.

Source code in scprint2/tasks/generate.py
def __call__(self, model: torch.nn.Module, adata: AnnData) -> AnnData:
    """
    __call__ generates expression profiles from cell embeddings

    Args:
        model (torch.nn.Module): The scPRINT model to be used for embedding and annotation.
        adata (AnnData): The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

    Raises:
        ValueError: If the model does not have a logger attribute.
        ValueError: If the model does not have a global_step attribute.

    Returns:
        AnnData: The annotated data matrix with the generated expression values,
            with "disp" and "zero_logits" layers when the model predicts them.
    """
    # one of "all" "sample" "none"
    model.predict_mode = "none"
    model.eval()
    model.on_predict_epoch_start()
    device = model.device.type
    dtype = (
        torch.float16
        if isinstance(model.transformer, FlashTransformer)
        else model.dtype
    )
    if self.embedding_to_use == ["all"]:
        use = [
            i
            for i in adata.obsm.keys()
            if i.startswith("scprint_emb_") and i != "scprint_emb_other"
        ]
    else:
        use = self.embedding_to_use
    res = []
    with (
        torch.no_grad(),
        torch.autocast(device_type=device, dtype=dtype),
    ):
        gene_pos = torch.tensor(
            [model.genes.index(g) for g in self.genelist],
        ).to(device=device)
        gene_pos = gene_pos.unsqueeze(0).repeat_interleave(self.batch_size, 0)
        req_depth = torch.tensor(adata.X.sum(1)).squeeze(-1).to(device=device)

        for batch in tqdm(range(adata.shape[0] // self.batch_size + 1)):
            embeddings = []
            start = batch * self.batch_size
            end = min((batch + 1) * self.batch_size, adata.shape[0])
            for emb in use:
                embeddings.append(
                    torch.tensor(adata.obsm[emb][start:end]).unsqueeze(1)
                )
            embeddings = torch.concat(embeddings, dim=1).to(device=device)

            output = model._generate(
                gene_pos=gene_pos[0 : end - start, :],
                cell_embs=embeddings,
                depth_mult=req_depth[start:end],
                req_depth=req_depth[start:end],
                metacell_token=None,
            )
            res.append(
                torch.concat(
                    [
                        output["mean"].detach().cpu().unsqueeze(0),
                        output["disp"].detach().cpu().unsqueeze(0),
                        output["zero_logits"].detach().cpu().unsqueeze(0),
                    ]
                )
                if "disp" in output
                else output["mean"].detach().cpu().unsqueeze(0)
            )
            torch.cuda.empty_cache()
    res = torch.concat(res, dim=1)
    pred_adata = AnnData(
        X=res[0, :, :].numpy(),
        obs=adata.obs.copy(),
        var=pd.DataFrame(
            index=pd.Index(self.genelist),
        ),
        layers=None
        if res.shape[0] == 1  # only the mean head was returned (no disp/zero_logits)
        else {
            "disp": res[1, :, :].numpy(),
            "zero_logits": res[2, :, :].numpy(),
        },
    )
    return pred_adata
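
A usage sketch (assuming adata.obsm already carries scprint_emb_* embeddings, e.g. produced by the Embedder, and that the listed genes are known to the model):

from scprint2.tasks.generate import Generate

gen = Generate(
    genelist=["ENSG00000141510", "ENSG00000012048"],  # illustrative Ensembl IDs
    batch_size=64,
    embedding_to_use=["all"],  # every scprint_emb_* key except scprint_emb_other
)
generated = gen(model, adata)
generated.X                 # predicted mean expression for the requested genes
"disp" in generated.layers  # dispersion/zero_logits layers when the model predicts them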

scprint2.tasks.impute

Classes:

Name Description
Imputer

Imputer

Imputer class for imputing missing values in scRNA-seq data using a scPRINT model

Parameters:
  • batch_size (int, default: 10 ) –

    Batch size for processing. Defaults to 10.

  • num_workers (int, default: 1 ) –

    Number of workers for data loading. Defaults to 1.

  • max_cells (int, default: 500000 ) –

    Maximum number of cells to impute; if the dataset is larger, a random subset of this size is used. Defaults to 500_000.

  • doplot (bool, default: False ) –

    Whether to generate plots of the similarity between the imputed and true expression data. Defaults to False. Only used when ground-truth values are available for the imputed genes.

  • method (str, default: 'generative' ) –

    Imputation method, either 'masking' or 'generative'. Defaults to 'generative'.

  • predict_depth_mult (int, default: 4 ) –

    Multiplier for prediction depth. Defaults to 4. This will artificially increase the sequencing depth (or number of counts) to 4 times the original depth.

  • genes_to_use (Optional[List[str]], default: None ) –

    List of genes to use for imputation. Defaults to None.

  • genes_to_impute (Optional[List[str]], default: None ) –

    List of genes to impute. Defaults to None.

  • save_every (int, default: 100000 ) –

    The number of cells to save at a time. Defaults to 100_000. This is important to avoid memory issues.

  • apply_zero_pred (bool, default: True ) –

    Whether to apply zero prediction adjustment. Defaults to True. Applying zero inflation may give results closer to the specific biases of sequencing technologies, but less biologically truthful ones.

  • use_knn (bool, default: True ) –

    Whether to use k-nearest neighbors information. Defaults to True.

Methods:

Name Description
__call__

call function to run imputation on a model and dataset

Source code in scprint2/tasks/impute.py
def __init__(
    self,
    batch_size: int = 10,
    num_workers: int = 1,
    max_cells: int = 500_000,
    doplot: bool = False,
    method: str = "generative",
    predict_depth_mult: int = 4,
    genes_to_use: Optional[List[str]] = None,
    genes_to_impute: Optional[List[str]] = None,
    save_every: int = 100_000,
    apply_zero_pred: bool = True,
    use_knn: bool = True,
):
    """
    Imputer class for imputing missing values in scRNA-seq data using a scPRINT model

    Args:
        batch_size (int, optional): Batch size for processing. Defaults to 10.
        num_workers (int, optional): Number of workers for data loading. Defaults to 1.
        max_cells (int, optional): Maximum number of cells to impute; if the dataset is larger, a random subset of this size is used. Defaults to 500_000.
        doplot (bool, optional): Whether to generate plots of the similarity between the imputed and true expression data. Defaults to False.
            Only used when ground-truth values are available for the imputed genes.
        method (str, optional): Imputation method, either 'masking' or 'generative'. Defaults to 'generative'.
        predict_depth_mult (int, optional): Multiplier for prediction depth. Defaults to 4.
            This will artificially increase the sequencing depth (or number of counts) to 4 times the original depth.
        genes_to_use (Optional[List[str]], optional): List of genes to use for imputation. Defaults to None.
        genes_to_impute (Optional[List[str]], optional): List of genes to impute. Defaults to None.
        save_every (int, optional): The number of cells to save at a time. Defaults to 100_000.
            This is important to avoid memory issues.
        apply_zero_pred (bool, optional): Whether to apply zero prediction adjustment. Defaults to True.
            Applying zero inflation may give results closer to the specific biases of sequencing technologies, but less biologically truthful ones.
        use_knn (bool, optional): Whether to use k-nearest neighbors information. Defaults to True.
    """
    self.batch_size = batch_size
    self.num_workers = num_workers
    self.max_cells = max_cells
    self.doplot = doplot
    self.predict_depth_mult = predict_depth_mult
    self.save_every = save_every
    self.genes_to_use = genes_to_use
    self.genes_to_impute = genes_to_impute
    self.method = method
    self.apply_zero_pred = apply_zero_pred
    self.use_knn = use_knn

__call__

call function to run imputation on a model and dataset

Parameters:
  • model (Module) –

    The scPRINT model to be used for imputation.

  • adata (AnnData) –

    The anndata of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

Returns:
  • Optional[ndarray]

    Optional[np.ndarray]: The random indices of the cells used when max_cells < adata.shape[0].

  • AnnData( AnnData ) –

    The imputed anndata.

Source code in scprint2/tasks/impute.py
def __call__(self, model: torch.nn.Module, adata: AnnData) -> tuple[Optional[np.ndarray], AnnData]:
    """
    __call__ runs imputation on a model and dataset

    Args:
        model (torch.nn.Module): The scPRINT model to be used for imputation.
        adata (AnnData): The anndata of shape n_obs x n_vars. Rows correspond to cells and columns to genes.

    Returns:
        Optional[np.ndarray]: The random indices of the cells used when max_cells < adata.shape[0].
        AnnData: The imputed anndata.
    """
    # Select a random subset of cells when the dataset exceeds max_cells
    random_indices = None
    if self.max_cells < adata.shape[0]:
        random_indices = np.random.randint(
            low=0, high=adata.shape[0], size=self.max_cells
        )
        adataset = SimpleAnnDataset(
            adata[random_indices],
            obs_to_output=["organism_ontology_term_id"],
            get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
        )
    else:
        adataset = SimpleAnnDataset(
            adata,
            obs_to_output=["organism_ontology_term_id"],
            get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
        )
    genes_to_use = set(model.genes) & set(self.genes_to_use)
    print(
        f"{100 * len(genes_to_use) / len(self.genes_to_use)}% of genes to use are available in the model"
    )
    genes_to_impute = set(model.genes) & set(self.genes_to_impute)
    print(
        f"{100 * len(genes_to_impute) / len(self.genes_to_impute)}% of genes to impute are available in the model"
    )
    tot = genes_to_use | genes_to_impute
    tot = sorted(tot)
    col = Collator(
        organisms=model.organisms,
        valid_genes=model.genes,
        how="some",
        genelist=list(genes_to_use)
        + (list(genes_to_impute) if self.method == "masking" else []),
        n_bins=model.n_input_bins if model.expr_emb_style == "binned" else 0,
    )
    dataloader = DataLoader(
        adataset,
        collate_fn=col,
        batch_size=self.batch_size,
        num_workers=self.num_workers,
        shuffle=False,
    )
    mask = None
    generate_on = None
    if self.method == "masking":
        mask = torch.Tensor(
            [i in genes_to_use for i in tot],
        ).to(device=model.device, dtype=torch.bool)
    elif self.method == "generative":
        generate_on = (
            torch.Tensor([model.genes.index(i) for i in genes_to_impute])
            .to(device=model.device)
            .long()
            .unsqueeze(0)
            .repeat(self.batch_size, 1)
        )
    else:
        raise ValueError("need to be one of generative or masking")

    prevplot = model.doplot
    model.doplot = self.doplot
    model.on_predict_epoch_start()
    model.eval()
    device = model.device.type
    rand = random_str()
    dtype = (
        torch.float16
        if type(model.transformer) is FlashTransformer
        else model.dtype
    )
    torch.cuda.empty_cache()
    with torch.no_grad(), torch.autocast(device_type=device, dtype=dtype):
        for batch in tqdm(dataloader):
            gene_pos, expression, depth = (
                batch["genes"].to(device),
                batch["x"].to(device),
                batch["depth"].to(device),
            )
            model._predict(
                gene_pos,
                expression,
                depth,
                knn_cells=(
                    batch["knn_cells"].to(device)
                    if model.expr_emb_style == "metacell" and self.use_knn
                    else None
                ),
                do_generate=self.method == "generative",
                depth_mult=self.predict_depth_mult,
                max_size_in_mem=self.save_every,
                name="impute" + rand + "_",
                mask=mask,
                generate_on=generate_on,
            )
    torch.cuda.empty_cache()
    model.log_adata(name="impute" + rand + "_" + str(model.counter))
    try:
        mdir = (
            model.logger.save_dir if model.logger.save_dir is not None else "data"
        )
    except Exception:
        mdir = "data"
    pred_adata = []
    for i in range(model.counter + 1):
        file = (
            mdir
            + "/step_"
            + str(model.global_step)
            + "_"
            + model.name
            + "_impute"
            + rand
            + "_"
            + str(i)
            + "_"
            + str(model.global_rank)
            + ".h5ad"
        )
        pred_adata.append(sc.read_h5ad(file))
        os.remove(file)
    pred_adata = concat(pred_adata)

    model.doplot = prevplot

    # pred_adata.X = adata.X if random_indices is None else adata.X[random_indices]
    true_imp = pred_adata.X[:, pred_adata.var.index.isin(genes_to_impute)].toarray()

    if true_imp.sum() > 0:
        # we had some gt
        pred_imp = pred_adata.layers["scprint_mu"][
            :, pred_adata.var.index.isin(genes_to_impute)
        ].toarray()
        pred_known = pred_adata.layers["scprint_mu"][
            :, pred_adata.var.index.isin(genes_to_use)
        ].toarray()
        true_known = pred_adata.X[
            :, pred_adata.var.index.isin(genes_to_use)
        ].toarray()

        if self.apply_zero_pred:
            pred_imp = (
                pred_imp
                * (
                    1
                    - F.sigmoid(
                        torch.Tensor(
                            pred_adata.layers["scprint_pi"][
                                :, pred_adata.var.index.isin(genes_to_impute)
                            ].toarray()
                        )
                    )
                ).numpy()
            )
            pred_known = (
                pred_known
                * (
                    1
                    - F.sigmoid(
                        torch.Tensor(
                            pred_adata.layers["scprint_pi"][
                                :, pred_adata.var.index.isin(genes_to_use)
                            ].toarray()
                        )
                    )
                ).numpy()
            )
        cell_wise_pred = np.array(
            [
                spearmanr(pred_imp[i], true_imp[i])[0]
                for i in range(pred_imp.shape[0])
            ]
        )
        cell_wise_known = np.array(
            [
                spearmanr(pred_known[i], true_known[i])[0]
                for i in range(pred_known.shape[0])
            ]
        )
        print(
            {
                "cell_wise_known": np.mean(cell_wise_known),
                "cell_wise_pred": np.mean(cell_wise_pred),
            }
        )
        if self.doplot:
            print("depth-wise plot")
            plot_cell_depth_wise_corr_improvement(cell_wise_known, cell_wise_pred)

    return random_indices, pred_adata
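
A usage sketch (gene lists are illustrative and must contain identifiers the model knows; model and adata are assumed to be prepared as for the Denoiser):

from scprint2.tasks.impute import Imputer

imputer = Imputer(
    method="generative",                        # or "masking"
    genes_to_use=list(adata.var.index[:2000]),  # observed genes given to the model
    genes_to_impute=["ENSG00000141510"],        # illustrative target gene
    apply_zero_pred=True,
)
random_indices, imputed = imputer(model, adata)
imputed.layers["scprint_mu"]  # predicted means for the used and imputed genes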

scprint2.tasks.finetune

Classes:

Name Description
FinetuneBatchClass

Functions:

Name Description
mmd_loss

Compute Maximum Mean Discrepancy (MMD) loss between two 2D embedding matrices.
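
mmd_loss's implementation is not reproduced on this page; for orientation, a single-kernel RBF estimator of squared MMD between two embedding matrices can be sketched as follows (illustrative only, not the library's code; the kernel and bandwidth choices are assumptions):

import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Biased V-statistic estimate of MMD^2 with a Gaussian kernel:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()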

FinetuneBatchClass

FinetuneBatchClass, a class to fine-tune a model on new data while correcting for batch effects

Parameters:
  • batch_key (str, default: 'batch' ) –

    The key in adata.obs that indicates the batch information. Defaults to "batch".

  • learn_batches_on (str, default: None ) –

    The key in adata.obs on which to learn batch embeddings. Defaults to None; if None, batch embeddings are not learned. The goal is, e.g. when working with a new species, to learn an embedding for it during fine-tuning and replace the "learn_batches_on" embedding in the model with it; in that case the key should be "organism_ontology_term_id". In some cases batch correction may indeed be learned better with this additional argument.

  • do_mmd_on (str, default: None ) –

    The key in adata.obs on which to apply the MMD loss. Defaults to None. After fine-tuning, the cell embedding should carry less information about this key.

  • predict_keys (List[str], default: ['cell_type_ontology_term_id'] ) –

    List of keys in adata.obs to predict during fine-tuning. Defaults to ["cell_type_ontology_term_id"].

  • batch_size (int, default: 16 ) –

    The size of the batches to be used in the DataLoader. Defaults to 16.

  • num_workers (int, default: 8 ) –

    The number of worker processes to use for data loading. Defaults to 8.

  • max_len (int, default: 5000 ) –

    The maximum length of the sequences to be processed. Defaults to 5000.

  • lr (float, default: 0.0002 ) –

    The learning rate for the optimizer. Defaults to 0.0002.

  • num_epochs (int, default: 8 ) –

    The number of epochs to train the model. Defaults to 8.

  • ft_mode (str, default: 'xpressor' ) –

    The fine-tuning mode, either "xpressor" or "full". Defaults to "xpressor".

  • frac_train (float, default: 0.8 ) –

    The fraction of data to be used for training. Defaults to 0.8.

  • loss_scalers (dict, default: {} ) –

    A dictionary specifying the scaling factors for different loss components. Defaults to {}. The keys "expr", "class", "mmd", "kl", and any of the predict_keys can be specified.

  • use_knn (bool, default: True ) –

    Whether to use k-nearest neighbors information. Defaults to True.
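
A minimal instantiation sketch (the values below are hypothetical and depend on your dataset; the loss weights are illustrative only):

from scprint2.tasks.finetune import FinetuneBatchClass

ft = FinetuneBatchClass(
    batch_key="batch",  # column of adata.obs holding the batch id
    predict_keys=["cell_type_ontology_term_id"],
    do_mmd_on="batch",  # hypothetical: penalize batch information in the embedding
    ft_mode="xpressor",  # only fine-tune the xpressor-related modules
    loss_scalers={"expr": 1.0, "mmd": 0.5},  # hypothetical weights
)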

Methods:

Name Description
__call__

fine-tune the model on the provided data

Source code in scprint2/tasks/finetune.py
def __init__(
    self,
    batch_key: str = "batch",
    predict_keys: List[str] = ["cell_type_ontology_term_id"],
    max_len: int = 5000,
    learn_batches_on: Optional[str] = None,
    num_workers: int = 8,
    batch_size: int = 16,
    num_epochs: int = 8,
    do_mmd_on: Optional[str] = None,
    lr: float = 0.0002,
    ft_mode: str = "xpressor",
    frac_train: float = 0.8,
    loss_scalers: dict = {},
    use_knn: bool = True,
):
    """
    FinetuneBatchClass a class to fine-tune a model on new data, optionally learning batch embeddings and correcting batch effects

    Args:
        batch_key (str, optional): The key in adata.obs that indicates the batch information. Defaults to "batch".
        learn_batches_on (str, optional): The key in adata.obs to learn batch embeddings on. Defaults to None;
            if None, batch embeddings are not learned.
            The goal is, e.g. when working with a new species, to learn an embedding for it during fine-tuning
            and replace the "learn_batches_on" embedding in the model with it; in that case it should be
            "organism_ontology_term_id". In some cases batch correction may indeed be learned better with this
            additional argument.
        do_mmd_on (str, optional): The key in adata.obs on which to apply the MMD loss. Defaults to None.
            After fine-tuning, the cell embedding should carry less information about this key.
        predict_keys (List[str], optional): List of keys in adata.obs to predict during fine-tuning. Defaults to ["cell_type_ontology_term_id"].
        batch_size (int, optional): The size of the batches to be used in the DataLoader. Defaults to 16.
        num_workers (int, optional): The number of worker processes to use for data loading. Defaults to 8.
        max_len (int, optional): The maximum length of the sequences to be processed. Defaults to 5000.
        lr (float, optional): The learning rate for the optimizer. Defaults to 0.0002.
        num_epochs (int, optional): The number of epochs to train the model. Defaults to 8.
        ft_mode (str, optional): The fine-tuning mode, either "xpressor" or "full". Defaults to "xpressor".
        frac_train (float, optional): The fraction of data to be used for training. Defaults to 0.8.
        loss_scalers (dict, optional): A dictionary specifying the scaling factors for different loss components. Defaults to {}.
            The keys "expr", "class", "mmd", "kl", and any of the predict_keys can be specified.
        use_knn (bool, optional): Whether to use k-nearest neighbors information. Defaults to True.
    """
    self.batch_size = batch_size
    self.num_workers = num_workers
    self.batch_key = batch_key
    self.learn_batches_on = learn_batches_on
    self.predict_keys = predict_keys
    self.max_len = max_len
    self.lr = lr
    self.num_epochs = num_epochs
    self.ft_mode = ft_mode
    self.frac_train = frac_train
    self.batch_emb = None
    self.batch_encoder = {}
    self.do_mmd_on = do_mmd_on
    self.loss_scalers = loss_scalers
    self.use_knn = use_knn

__call__

fine-tune the model on the provided data

Parameters:
  • model (Module) –

    The scPRINT model to be used for embedding and annotation.

  • adata (AnnData, default: None ) –

    The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes. Defaults to None. If provided, it will be split into training and validation sets.

  • train_data (AnnData, default: None ) –

    The training data. Defaults to None. Ignored if adata is provided.

  • val_data (AnnData, default: None ) –

    The validation data. Defaults to None. Ignored if adata is provided.

Raises:
  • ValueError

    If the model does not have a logger attribute.

  • ValueError

    If the model does not have a global_step attribute.

Returns:
  • Module

    torch.nn.Module: the fine-tuned model
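
A hedged call sketch (checkpoint loading is assumed and omitted; the file path is hypothetical):

import scanpy as sc
from scprint2.tasks.finetune import FinetuneBatchClass

adata = sc.read_h5ad("my_dataset.h5ad")  # hypothetical path
ft = FinetuneBatchClass(batch_key="batch")
# model: a loaded scPRINT torch.nn.Module (loading omitted here)
finetuned = ft(model, adata=adata)  # adata is split train/val using frac_train=0.8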

Source code in scprint2/tasks/finetune.py
def __call__(
    self,
    model: torch.nn.Module,
    adata: AnnData = None,
    train_data: AnnData = None,
    val_data: AnnData = None,
) -> torch.nn.Module:
    """
    __call__ function to fine-tune the model on the provided data

    Args:
        model (torch.nn.Module): The scPRINT model to be used for embedding and annotation.
        adata (AnnData, optional): The annotated data matrix of shape n_obs x n_vars. Rows correspond to cells and columns to genes.
            Defaults to None. If provided, it will be split into training and validation sets.
        train_data (AnnData, optional): The training data. Defaults to None.
            Ignored if adata is provided.
        val_data (AnnData, optional): The validation data. Defaults to None.
            Ignored if adata is provided.

    Raises:
        ValueError: If the model does not have a logger attribute.
        ValueError: If the model does not have a global_step attribute.

    Returns:
        torch.nn.Module: the fine-tuned model
    """
    # predict_mode is one of "all", "sample" or "none"
    model.predict_mode = "none"
    if self.ft_mode == "xpressor":
        # freeze everything first, then unfreeze the xpressor-specific modules below
        for val in model.parameters():
            val.requires_grad = False

        for val in model.cell_transformer.parameters():
            val.requires_grad = True
        for val in model.transformer.blocks[-1].parameters():
            val.requires_grad = True
        for block in model.transformer.blocks:
            # requires_grad must be set on the parameters, not on the module itself
            for val in block.cross_attn.parameters():
                val.requires_grad = True
        for val in model.compressor.parameters():
            val.requires_grad = True
        for key in self.predict_keys:
            for val in model.cls_decoders[key].parameters():
                val.requires_grad = True
    elif self.ft_mode == "full":
        for val in model.parameters():
            val.requires_grad = True
    else:
        raise ValueError("ft_mode must be one of 'xpressor' or 'full'")

    # PREPARING THE DATA
    if adata is not None:
        n_train = int(self.frac_train * len(adata))
        train_idx = np.random.choice(len(adata), n_train, replace=False)
        val_idx = np.setdiff1d(np.arange(len(adata)), train_idx)

        train_data = adata[train_idx].copy()
        val_data = adata[val_idx].copy()

        print(f"Training data: {train_data.shape}")
        print(f"Validation data: {val_data.shape}")

    mencoders = {}
    for k, v in model.label_decoders.items():
        mencoders[k] = {va: ke for ke, va in v.items()}
    # this must keep its original name since the collator expects it that way;
    # otherwise org_to_id would need to be sent as a param

    for i in self.predict_keys:
        if len(set(train_data.obs[i]) - set(mencoders[i].keys())) > 0:
            print("missing labels for ", i)
            train_data.obs[i] = train_data.obs[i].apply(
                lambda x: x if x in mencoders[i] else "unknown"
            )
    if "organism_ontology_term_id" not in self.predict_keys:
        self.predict_keys.append("organism_ontology_term_id")
    # create datasets
    self.batch_encoder = {
        i: n
        for n, i in enumerate(
            train_data.obs[self.batch_key].astype("category").cat.categories
        )
    }
    mencoders[self.batch_key] = self.batch_encoder
    train_dataset = SimpleAnnDataset(
        train_data,
        obs_to_output=self.predict_keys + [self.batch_key],
        get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
        encoder=mencoders,
    )
    if val_data is not None:
        for i in self.predict_keys:
            if i != "organism_ontology_term_id":
                if len(set(val_data.obs[i]) - set(mencoders[i].keys())) > 0:
                    val_data.obs[i] = val_data.obs[i].apply(
                        lambda x: x if x in mencoders[i] else "unknown"
                    )
        self.batch_encoder.update(
            {
                i: n + len(self.batch_encoder)
                for n, i in enumerate(
                    val_data.obs[self.batch_key].astype("category").cat.categories
                )
                if i not in self.batch_encoder
            }
        )
        mencoders[self.batch_key] = self.batch_encoder
        val_dataset = SimpleAnnDataset(
            val_data,
            obs_to_output=self.predict_keys + [self.batch_key],
            get_knn_cells=model.expr_emb_style == "metacell" and self.use_knn,
            encoder=mencoders,
        )

    # Create collator
    collator = Collator(
        organisms=model.organisms,
        valid_genes=model.genes,
        class_names=self.predict_keys + [self.batch_key],
        how="random expr",  # or "all expr" for full expression
        max_len=self.max_len,
        org_to_id=mencoders.get("organism_ontology_term_id", {}),
    )

    # Create data loaders
    train_loader = DataLoader(
        train_dataset,
        collate_fn=collator,
        batch_size=self.batch_size,  # Adjust based on GPU memory
        num_workers=self.num_workers,
        shuffle=True,
    )
    if val_data is not None:
        val_loader = DataLoader(
            val_dataset,
            collate_fn=collator,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            shuffle=False,
        )

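    # when learning batch embeddings, a fresh embedding table is created with one
    # row per training batch; its width matches the model's compressed embedding size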
    if self.learn_batches_on is not None:
        if val_data is not None:
            print(
                "note: all batch_key values in val_data must also be present in train_data!"
            )
        self.batch_emb = torch.nn.Embedding(
            num_embeddings=train_data.obs[self.batch_key].nunique(),
            embedding_dim=(
                model.compressor[self.learn_batches_on].fc_mu.weight.shape[0]
                if hasattr(model, "compressor")
                else model.d_model
            ),
        )

    ## PREPARING THE OPTIM
    all_params = (
        list(model.parameters())
        # + list(batch_cls.parameters())
        + (
            list(self.batch_emb.parameters())
            if self.learn_batches_on is not None
            else []
        )
    )
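    # frozen parameters (requires_grad=False) receive no gradients, so AdamW
    # effectively updates only the unfrozen subset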

    # Setup optimizer
    optimizer = torch.optim.AdamW(
        all_params,
        lr=self.lr,
        weight_decay=0.01,
        betas=(0.9, 0.999),
        eps=1e-8,
    )

    # Setup scheduler
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=2
    )

    # Setup automatic mixed precision
    scaler = torch.cuda.amp.GradScaler() if torch.cuda.is_available() else None

    for k, i in model.mat_labels_hierarchy.items():
        model.mat_labels_hierarchy[k] = i.to(model.device)

    ## train
    for epoch in range(self.num_epochs):
        print(f"\nEpoch {epoch + 1}/{self.num_epochs}")
        print(f"Current learning rate: {optimizer.param_groups[0]['lr']:.2e}")

        # Training phase
        train_loss = 0.0
        train_steps = 0
        avg_expr = 0
        avg_cls = 0
        avg_mmd = 0

        pbar = tqdm(train_loader, desc="Training")
        model.train()
        for batch_idx, batch in enumerate(pbar):
            optimizer.zero_grad()
            total_loss, cls_loss, mmd, loss_expr = self.batch_corr_pass(
                batch, model
            )
            # Backward pass; fall back to a plain step when AMP is unavailable (CPU)
            if scaler is not None:
                scaler.scale(total_loss).backward()
                scaler.unscale_(optimizer)
                torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
                scaler.step(optimizer)
                scaler.update()
            else:
                total_loss.backward()
                torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
                optimizer.step()

            train_loss += total_loss.item()
            train_steps += 1
            avg_cls += cls_loss.item()
            avg_expr += loss_expr.item()
            avg_mmd += mmd
            # Update progress bar
            pbar.set_postfix(
                {
                    "loss": f"{total_loss.item():.4f}",
                    "avg_loss": f"{train_loss / train_steps:.4f}",
                    "cls_loss": f"{cls_loss.item():.4f}",
                    "mmd_loss": f"{mmd:.4f}",
                    "expr_loss": f"{loss_expr.item():.4f}",
                }
            )

        # Validation phase
        if val_data is not None:
            model.eval()
            val_loss = 0.0
            val_steps = 0
            val_loss_expr = 0.0
            val_mmd = 0.0
            val_cls = 0.0

            with torch.no_grad():
                for batch in val_loader:  # tqdm(val_loader, desc="Validation"):
                    loss_val, cls_loss, mmd, loss_expr = self.batch_corr_pass(
                        batch, model
                    )
                    val_loss += loss_val.item()
                    val_steps += 1
                    val_loss_expr += loss_expr.item()
                    val_mmd += mmd
                    val_cls += cls_loss.item()
            try:
                avg_val_loss = val_loss / val_steps
                avg_train_loss = train_loss / train_steps
            except ZeroDivisionError:
                print(
                    "Error: Division by zero occurred while calculating average losses."
                )
                avg_train_loss = 0
                avg_val_loss = 0
            print(
                "cls_loss: {:.4f}, mmd_loss: {:.4f}, expr_loss: {:.4f}".format(
                    val_cls / val_steps,
                    val_mmd / val_steps,
                    val_loss_expr / val_steps,
                )
            )
            print(f"Train Loss: {avg_train_loss:.4f}, Val Loss: {avg_val_loss:.4f}")

            # Store LR before scheduler step for comparison
            lr_before = optimizer.param_groups[0]["lr"]

            # Update learning rate
            scheduler.step(avg_val_loss)

            # Check if LR was reduced
            lr_after = optimizer.param_groups[0]["lr"]
            if lr_after < lr_before:
                print(
                    f"🔻 Learning rate reduced from {lr_before:.2e} to {lr_after:.2e} (factor: {lr_after / lr_before:.3f})"
                )
            else:
                print(f"✅ Learning rate unchanged: {lr_after:.2e}")

            # Early stopping check (simple implementation)
            if epoch > 3 and val_loss / val_steps > 1.3 * avg_train_loss:
                print("Early stopping due to overfitting")
                break

    print("Manual fine-tuning completed!")
    model.eval()
    return model

mmd_loss

Compute Maximum Mean Discrepancy (MMD) loss between two 2D embedding matrices.

Parameters:
  • X (Tensor) –

    Tensor of shape (n1, emb_dim) - first set of embeddings

  • Y (Tensor) –

    Tensor of shape (n2, emb_dim) - second set of embeddings

Returns:
  • Tensor

    torch.Tensor: MMD² estimate; larger values indicate more dissimilar embedding sets
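
The estimator implemented below is the unbiased MMD^2: MMD^2(X, Y) = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], with the diagonal excluded from the within-group averages. A minimal usage sketch (shapes and values are illustrative only):

import torch
from scprint2.tasks.finetune import mmd_loss

X = torch.randn(128, 64)  # e.g. embeddings of cells from batch A
Y = torch.randn(256, 64)  # e.g. embeddings of cells from batch B
loss = mmd_loss(X, Y)  # scalar tensor; minimizing it pulls the two sets together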

Source code in scprint2/tasks/finetune.py
def mmd_loss(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    """
    Compute Maximum Mean Discrepancy (MMD) loss between two 2D embedding matrices.

    Args:
        X (torch.Tensor): Tensor of shape (n1, emb_dim) - first set of embeddings
        Y (torch.Tensor): Tensor of shape (n2, emb_dim) - second set of embeddings

    Returns:
        torch.Tensor: MMD² estimate; larger values indicate more dissimilar embedding sets
    """

    def rbf_kernel(x, y, sigma):
        """Compute RBF kernel between two sets of vectors"""
        distance = torch.cdist(x, y, p=2) ** 2
        return torch.exp(-distance / (2 * sigma**2))

    def energy_kernel(x, y):
        """Compute Energy kernel between two sets of vectors"""
        distance = torch.cdist(x, y, p=2)
        return -distance

    # The energy kernel below is bandwidth-free; to use the RBF kernel instead,
    # re-enable it with several bandwidths, e.g. sigmas = [0.1, 1.0, 10.0]
    sigmas = [0]  # unused by the energy kernel; kept to average over kernels
    mmd_loss = 0.0

    for sigma in sigmas:
        # K(X, X) - kernel matrix within first group (n1 x n1)
        # k_xx = rbf_kernel(X, X, sigma)
        k_xx = energy_kernel(X, X)
        # K(Y, Y) - kernel matrix within second group (n2 x n2)
        # k_yy = rbf_kernel(Y, Y, sigma)
        k_yy = energy_kernel(Y, Y)
        # K(X, Y) - kernel matrix between groups (n1 x n2)
        # k_xy = rbf_kernel(X, Y, sigma)
        k_xy = energy_kernel(X, Y)

        # Unbiased MMD estimation
        n1 = X.shape[0]
        n2 = Y.shape[0]

        # Remove diagonal elements for unbiased estimation of K(X,X) and K(Y,Y)
        # For K(X,X): exclude diagonal
        if n1 > 1:
            mask_xx = 1 - torch.eye(n1, device=X.device)
            k_xx_term = (k_xx * mask_xx).sum() / (n1 * (n1 - 1))
        else:
            k_xx_term = 0.0

        # For K(Y,Y): exclude diagonal
        if n2 > 1:
            mask_yy = 1 - torch.eye(n2, device=Y.device)
            k_yy_term = (k_yy * mask_yy).sum() / (n2 * (n2 - 1))
        else:
            k_yy_term = 0.0

        # For K(X,Y): use all elements (no diagonal to exclude)
        k_xy_term = k_xy.mean()

        # MMD^2 = E[K(X,X)] + E[K(Y,Y)] - 2*E[K(X,Y)]
        mmd_squared = k_xx_term + k_yy_term - 2 * k_xy_term
        mmd_loss += mmd_squared

    # Average over the kernels; higher MMD = more different distributions
    return mmd_loss / len(sigmas)