This module contains core custom models, loss functions, and a default layer group splitter for use in applying discriminative learning rates to your Hugging Face models trained via fastai.
 
What we're running with at the time this documentation was generated:
torch: 1.9.0+cu102
fastai: 2.5.2
transformers: 4.10.0

Base splitter, model wrapper, and model callback

hf_splitter[source]

hf_splitter(m:Module)

Splits the Hugging Face model based on various model architecture conventions

Parameters:

  • m : <class 'fastai.torch_core.Module'>
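
A minimal sketch of how the splitter is typically used (not part of the generated docs; model is assumed to be an HF_BaseModelWrapper instance, as in the training example further below):

# hf_splitter groups the wrapped model's parameters into layer groups; fastai then applies
# discriminative learning rates across these groups (e.g., via lr_max=slice(...)).
param_groups = hf_splitter(model)
print(len(param_groups))  # e.g., 3 groups: embeddings, transformer blocks, task-specific head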

class HF_BaseModelWrapper[source]

HF_BaseModelWrapper(hf_model:PreTrainedModel, output_hidden_states:bool=False, output_attentions:bool=False, hf_model_kwargs={}) :: Module

Same as nn.Module, but no need for subclasses to call super().__init__

Parameters:

  • hf_model : <class 'transformers.modeling_utils.PreTrainedModel'>

    Your Hugging Face model

  • output_hidden_states : <class 'bool'>, optional

    If True, hidden_states will be returned and accessed from Learner

  • output_attentions : <class 'bool'>, optional

    If True, attentions will be returned and accessed from Learner

  • hf_model_kwargs : <class 'dict'>, optional

    Any additional keyword arguments you want passed into your model's forward method

Note that HF_BaseModelWrapper includes some nifty code for just passing in the things your model needs, as not all transformer architectures require/use the same information.
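
For illustration, a hedged example of wrapping a model and requesting extra outputs (hf_model is the Hugging Face model created in the sequence classification example below):

# Ask the wrapper to also return hidden states and attentions from the forward pass.
model = HF_BaseModelWrapper(hf_model, output_hidden_states=True, output_attentions=True)

# Any additional keyword arguments for the model's forward method can be supplied via
# hf_model_kwargs, e.g., HF_BaseModelWrapper(hf_model, hf_model_kwargs={...}).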

class HF_PreCalculatedLoss[source]

HF_PreCalculatedLoss()

If you want to let your Hugging Face model calculate the loss for you, make sure you include the labels argument in your inputs and use HF_PreCalculatedLoss as your loss function. Even though we don't really need a loss function per se, we have to provide a custom loss class/function for fastai to function properly (e.g., one with decodes and activation methods). Why? Because these methods get called in methods like show_results to get the actual predictions.
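
A minimal sketch of wiring up HF_PreCalculatedLoss (this assumes your batches already include a labels entry so the Hugging Face model can compute the loss itself):

# The model returns the loss, so HF_PreCalculatedLoss mostly exists to give fastai the
# decodes/activation hooks it expects (used by show_results, etc.).
learn = Learner(dls,
                HF_BaseModelWrapper(hf_model),
                loss_func=HF_PreCalculatedLoss(),
                cbs=[HF_BaseModelCallback],
                splitter=hf_splitter)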

class HF_BaseModelCallback[source]

HF_BaseModelCallback(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, before_step=None, after_cancel_step=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

Basic class handling tweaks of the training loop by changing a Learner in various events

Parameters:

  • after_create : <class 'NoneType'>, optional

  • before_fit : <class 'NoneType'>, optional

  • before_epoch : <class 'NoneType'>, optional

  • before_train : <class 'NoneType'>, optional

  • before_batch : <class 'NoneType'>, optional

  • after_pred : <class 'NoneType'>, optional

  • after_loss : <class 'NoneType'>, optional

  • before_backward : <class 'NoneType'>, optional

  • before_step : <class 'NoneType'>, optional

  • after_cancel_step : <class 'NoneType'>, optional

  • after_step : <class 'NoneType'>, optional

  • after_cancel_batch : <class 'NoneType'>, optional

  • after_batch : <class 'NoneType'>, optional

  • after_cancel_train : <class 'NoneType'>, optional

  • after_train : <class 'NoneType'>, optional

  • before_validate : <class 'NoneType'>, optional

  • after_cancel_validate : <class 'NoneType'>, optional

  • after_validate : <class 'NoneType'>, optional

  • after_cancel_epoch : <class 'NoneType'>, optional

  • after_epoch : <class 'NoneType'>, optional

  • after_cancel_fit : <class 'NoneType'>, optional

  • after_fit : <class 'NoneType'>, optional

We use a Callback for handling what is returned from the Hugging Face model. The return type is [ModelOutput](https://huggingface.co/transformers/main_classes/output.html#transformers.file_utils.ModelOutput), which makes it easy to return all the goodies we asked for.

Note that your Learner's loss will be set for you only if the Hugging Face model returns one and you are using the HF_PreCalculatedLoss loss function.

Also note that anything else you asked the model to return (for example, the last hidden state, etc.) will be available to you via the blurr_model_outputs property attached to your Learner. For example, assuming you are using BERT for a classification task ... if you have told your HF_BaseModelWrapper instance to return attentions, you'd be able to access them via learn.blurr_model_outputs['attentions'].
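
A hedged example (assuming the sequence classification setup below, with the wrapper configured to return attentions):

# Request attentions when wrapping the model ...
model = HF_BaseModelWrapper(hf_model, output_attentions=True)
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(),
                cbs=[HF_BaseModelCallback], splitter=hf_splitter)
learn.fit_one_cycle(1, lr_max=1e-3)

# ... and read them back off the Learner after a forward pass has run (the values reflect the
# most recent batch processed).
attentions = learn.blurr_model_outputs['attentions']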

Sequence classification

Below demonstrates how to set up your blurr pipeline for a sequence classification task (e.g., a model that requires a single text input) using the mid-, high-, and low-level APIs.

Using the mid-level API

path = untar_data(URLs.IMDB_SAMPLE)
imdb_df = pd.read_csv(path/'texts.csv')
imdb_df.head()
label text is_valid
0 negative Un-bleeping-believable! Meg Ryan doesn't even look her usual pert lovable self in this, which normally makes me forgive her shallow ticky acting schtick. Hard to believe she was the producer on this dog. Plus Kevin Kline: what kind of suicide trip has his career been on? Whoosh... Banzai!!! Finally this was directed by the guy who did Big Chill? Must be a replay of Jonestown - hollywood style. Wooofff! False
1 positive This is a extremely well-made film. The acting, script and camera-work are all first-rate. The music is good, too, though it is mostly early in the film, when things are still relatively cheery. There are no really superstars in the cast, though several faces will be familiar. The entire cast does an excellent job with the script.<br /><br />But it is hard to watch, because there is no good end to a situation like the one presented. It is now fashionable to blame the British for setting Hindus and Muslims against each other, and then cruelly separating them into two countries. There is som... False
2 negative Every once in a long while a movie will come along that will be so awful that I feel compelled to warn people. If I labor all my days and I can save but one soul from watching this movie, how great will be my joy.<br /><br />Where to begin my discussion of pain. For starters, there was a musical montage every five minutes. There was no character development. Every character was a stereotype. We had swearing guy, fat guy who eats donuts, goofy foreign guy, etc. The script felt as if it were being written as the movie was being shot. The production value was so incredibly low that it felt li... False
3 positive Name just says it all. I watched this movie with my dad when it came out and having served in Korea he had great admiration for the man. The disappointing thing about this film is that it only concentrate on a short period of the man's life - interestingly enough the man's entire life would have made such an epic bio-pic that it is staggering to imagine the cost for production.<br /><br />Some posters elude to the flawed characteristics about the man, which are cheap shots. The theme of the movie "Duty, Honor, Country" are not just mere words blathered from the lips of a high-brassed offic... False
4 negative This movie succeeds at being one of the most unique movies you've seen. However this comes from the fact that you can't make heads or tails of this mess. It almost seems as a series of challenges set up to determine whether or not you are willing to walk out of the movie and give up the money you just paid. If you don't want to feel slighted you'll sit through this horrible film and develop a real sense of pity for the actors involved, they've all seen better days, but then you realize they actually got paid quite a bit of money to do this and you'll lose pity for them just like you've alr... False
model_cls = AutoModelForSequenceClassification

pretrained_model_name = "distilroberta-base" # "distilbert-base-uncased" "bert-base-uncased"
hf_arch, hf_config, hf_tokenizer, hf_model = BLURR.get_hf_objects(pretrained_model_name, model_cls=model_cls)
blocks = (HF_TextBlock(hf_arch, hf_config, hf_tokenizer, hf_model), CategoryBlock)
dblock = DataBlock(blocks=blocks, get_x=ColReader('text'), get_y=ColReader('label'), splitter=ColSplitter())
dls = dblock.dataloaders(imdb_df, bs=4)
dls.show_batch(dataloaders=dls, max_n=2, trunc_at=500)
text target
0 Raising Victor Vargas: A Review<br /><br />You know, Raising Victor Vargas is like sticking your hands into a big, steaming bowl of oatmeal. It's warm and gooey, but you're not sure if it feels right. Try as I might, no matter how warm and gooey Raising Victor Vargas became I was always aware that something didn't quite feel right. Victor Vargas suffers from a certain overconfidence on the director's part. Apparently, the director thought that the ethnic backdrop of a Latino family on the lower negative
1 This is the last of four swashbucklers from France I've scheduled for viewing during this Christmas season: the others (in order of viewing) were the uninspired THE BLACK TULIP (1964; from the same director as this one but not nearly as good), the surprisingly effective LADY Oscar (1979; which had originated as a Japanese manga!) and the splendid CARTOUCHE (1962). Actually, I had watched this one not too long ago on late-night Italian TV and recall not being especially bowled over by it, so tha positive

Training

We'll also add in custom summary methods for blurr learners/models that work with dictionary inputs

model = HF_BaseModelWrapper(hf_model)

learn = Learner(dls, 
                model,
                opt_func=partial(OptimWrapper, opt=torch.optim.Adam),
                loss_func=CrossEntropyLossFlat(),
                metrics=[accuracy],
                cbs=[HF_BaseModelCallback],
                splitter=hf_splitter)

learn.freeze()

Note: .to_fp16() requires a GPU, so it was removed so the tests can run on GitHub. Let's check that we can train the model and get predictions.

learn.summary()
print(len(learn.opt.param_groups))
3
learn.lr_find(suggest_funcs=[minimum, steep, valley, slide])
/home/wgilliam/miniconda3/envs/blurr/lib/python3.9/site-packages/fastai/callback/schedule.py:270: UserWarning: color is redundantly defined by the 'color' keyword argument and the fmt string "ro" (-> color='r'). The keyword argument will take precedence.
  ax.plot(val, idx, 'ro', label=nm, c=color)
SuggestedLRs(minimum=8.31763736641733e-07, steep=0.02290867641568184, valley=0.001737800776027143, slide=0.0002754228771664202)
learn.fit_one_cycle(1, lr_max=1e-3)
epoch train_loss valid_loss accuracy time
0 0.339713 0.319738 0.895000 00:11

Showing results

And here we create a @typedispatched implementation of Learner.show_results.

learn.show_results(learner=learn, max_n=2, trunc_at=500)
text target prediction
0 The trouble with the book, "Memoirs of a Geisha" is that it had Japanese surfaces but underneath the surfaces it was all an American man's way of thinking. Reading the book is like watching a magnificent ballet with great music, sets, and costumes yet performed by barnyard animals dressed in those costumes—so far from Japanese ways of thinking were the characters.<br /><br />The movie isn't about Japan or real geisha. It is a story about a few American men's mistaken ideas about Japan and geish negative negative
1 <br /><br />I'm sure things didn't exactly go the same way in the real life of Homer Hickam as they did in the film adaptation of his book, Rocket Boys, but the movie "October Sky" (an anagram of the book's title) is good enough to stand alone. I have not read Hickam's memoirs, but I am still able to enjoy and understand their film adaptation. The film, directed by Joe Johnston and written by Lewis Colick, records the story of teenager Homer Hickam (Jake Gyllenhaal), beginning in October of 195 positive positive

Learner.blurr_predict[source]

Learner.blurr_predict(items, rm_type_tfms=None)

Parameters:

  • items : <class 'inspect._empty'>

  • rm_type_tfms : <class 'NoneType'>, optional

We need to replace fastai's Learner.predict method with the one above, which is able to work with inputs represented by multiple tensors included in a dictionary.

learn.blurr_predict('I really liked the movie')
[(('positive',), (#1) [tensor(1)], (#1) [tensor([0.1276, 0.8724])])]
learn.blurr_predict(['I really liked the movie', 'I really hated the movie'])
[(('positive',), (#1) [tensor(1)], (#1) [tensor([0.1276, 0.8724])]),
 (('negative',), (#1) [tensor(0)], (#1) [tensor([0.6717, 0.3283])])]

Though not useful in sequence classification, we will also add a blurr_generate method to Learner that uses Hugging Face's PreTrainedModel.generate for text generation tasks.

For the full list of arguments you can pass in, see here. You can also check out their "How To Generate" notebook for more information about how it all works.

Learner.blurr_generate[source]

Learner.blurr_generate(inp, **kwargs)

Uses the built-in generate method to generate the text (see here for a list of arguments you can pass in)

Parameters:

  • inp : <class 'inspect._empty'>

  • kwargs : <class 'inspect._empty'>
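
Though it doesn't apply to the classification model trained above, here is a hedged sketch of how blurr_generate might be called against a generation-capable Learner (gen_learn below is a hypothetical Learner built around, e.g., a BART model):

# kwargs are forwarded to Hugging Face's PreTrainedModel.generate.
outputs = gen_learn.blurr_generate("Some text to continue or summarize ...",
                                   max_length=50, num_beams=4, early_stopping=True)
print(outputs)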

learn.unfreeze()
learn.fit_one_cycle(2, lr_max=slice(1e-7, 1e-4))
epoch train_loss valid_loss accuracy time
0 0.302813 0.284582 0.900000 00:17
1 0.206623 0.285117 0.910000 00:17
learn.recorder.plot_loss()
learn.show_results(learner=learn, max_n=2, trunc_at=500)
text target prediction
0 The trouble with the book, "Memoirs of a Geisha" is that it had Japanese surfaces but underneath the surfaces it was all an American man's way of thinking. Reading the book is like watching a magnificent ballet with great music, sets, and costumes yet performed by barnyard animals dressed in those costumes—so far from Japanese ways of thinking were the characters.<br /><br />The movie isn't about Japan or real geisha. It is a story about a few American men's mistaken ideas about Japan and geish negative negative
1 <br /><br />I'm sure things didn't exactly go the same way in the real life of Homer Hickam as they did in the film adaptation of his book, Rocket Boys, but the movie "October Sky" (an anagram of the book's title) is good enough to stand alone. I have not read Hickam's memoirs, but I am still able to enjoy and understand their film adaptation. The film, directed by Joe Johnston and written by Lewis Colick, records the story of teenager Homer Hickam (Jake Gyllenhaal), beginning in October of 195 positive positive
learn.blurr_predict("This was a really good movie")
[(('positive',), (#1) [tensor(1)], (#1) [tensor([0.1670, 0.8330])])]
learn.blurr_predict("Acting was so bad it was almost funny.")
[(('negative',), (#1) [tensor(0)], (#1) [tensor([0.8782, 0.1218])])]

Inference

export_fname = 'seq_class_learn_export'

Using fast.ai Learner.export and load_learner

learn.export(fname=f'{export_fname}.pkl')
inf_learn = load_learner(fname=f'{export_fname}.pkl')
inf_learn.blurr_predict("This movie should not be seen by anyone!!!!")
[(('negative',), (#1) [tensor(0)], (#1) [tensor([0.9069, 0.0931])])]

Using the high-level API

Blearner

Instead of constructing our Learner manually as above, we can use the Blearner class, which provides sensible defaults for training.

model_cls = AutoModelForSequenceClassification

pretrained_model_name = "distilroberta-base" # "distilbert-base-uncased" "bert-base-uncased"
hf_arch, hf_config, hf_tokenizer, hf_model = BLURR.get_hf_objects(pretrained_model_name, model_cls=model_cls)

dls = dblock.dataloaders(imdb_df, bs=4)

class Blearner[source]

Blearner(dls:DataLoaders, hf_model:PreTrainedModel, base_model_cb:HF_BaseModelCallback=HF_BaseModelCallback, loss_func=None, opt_func=Adam, lr=0.001, splitter=trainable_params, cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95)) :: Learner

Group together a model, some dls and a loss_func to handle training

Parameters:

  • dls : <class 'fastai.data.core.DataLoaders'>

    Your fast.ai DataLoaders

  • hf_model : <class 'transformers.modeling_utils.PreTrainedModel'>

    Your pretrained Hugging Face transformer

  • base_model_cb : <class 'blurr.modeling.core.HF_BaseModelCallback'>, optional

    Your `HF_BaseModelCallback`

  • kwargs : <class 'inspect._empty'>
learn = Blearner(dls, hf_model, metrics=[accuracy])
learn.fit_one_cycle(1, lr_max=1e-3)
epoch train_loss valid_loss accuracy time
0 0.380025 0.304496 0.885000 00:11
learn.show_results(learner=learn, max_n=2, trunc_at=500)
text target prediction
0 The trouble with the book, "Memoirs of a Geisha" is that it had Japanese surfaces but underneath the surfaces it was all an American man's way of thinking. Reading the book is like watching a magnificent ballet with great music, sets, and costumes yet performed by barnyard animals dressed in those costumes—so far from Japanese ways of thinking were the characters.<br /><br />The movie isn't about Japan or real geisha. It is a story about a few American men's mistaken ideas about Japan and geish negative negative
1 <br /><br />I'm sure things didn't exactly go the same way in the real life of Homer Hickam as they did in the film adaptation of his book, Rocket Boys, but the movie "October Sky" (an anagram of the book's title) is good enough to stand alone. I have not read Hickam's memoirs, but I am still able to enjoy and understand their film adaptation. The film, directed by Joe Johnston and written by Lewis Colick, records the story of teenager Homer Hickam (Jake Gyllenhaal), beginning in October of 195 positive positive
learn.blurr_predict("This was a really good movie")
[(('positive',), (#1) [tensor(1)], (#1) [tensor([0.1946, 0.8054])])]
learn.export(fname=f'{export_fname}.pkl')
inf_learn = load_learner(fname=f'{export_fname}.pkl')
inf_learn.blurr_predict("This movie should not be seen by anyone!!!!")
[(('negative',), (#1) [tensor(0)], (#1) [tensor([0.7880, 0.2120])])]

BlearnerForSequenceClassification

We also introduce a task-specific Blearner that gets you your DataBlock, DataLoaders, and Blearner in one line of code!

class BlearnerForSequenceClassification[source]

BlearnerForSequenceClassification(dls:DataLoaders, hf_model:PreTrainedModel, base_model_cb:HF_BaseModelCallback=HF_BaseModelCallback, loss_func=None, opt_func=Adam, lr=0.001, splitter=trainable_params, cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95)) :: Blearner

Group together a model, some dls and a loss_func to handle training

Parameters:

  • dls : <class 'fastai.data.core.DataLoaders'>

  • hf_model : <class 'transformers.modeling_utils.PreTrainedModel'>

  • kwargs : <class 'inspect._empty'>

learn = BlearnerForSequenceClassification.from_dataframe(imdb_df, 'distilroberta-base', 
                                                         text_attr='text', label_attr='label', 
                                                         dl_kwargs={'bs':4})
learn.fit_one_cycle(1, lr_max=1e-3)
epoch train_loss valid_loss f1_score accuracy time
0 0.375952 0.293004 0.880435 0.890000 00:12
learn.show_results(learner=learn, max_n=2, trunc_at=500)
text target prediction
0 The trouble with the book, "Memoirs of a Geisha" is that it had Japanese surfaces but underneath the surfaces it was all an American man's way of thinking. Reading the book is like watching a magnificent ballet with great music, sets, and costumes yet performed by barnyard animals dressed in those costumes—so far from Japanese ways of thinking were the characters.<br /><br />The movie isn't about Japan or real geisha. It is a story about a few American men's mistaken ideas about Japan and geish negative negative
1 <br /><br />I'm sure things didn't exactly go the same way in the real life of Homer Hickam as they did in the film adaptation of his book, Rocket Boys, but the movie "October Sky" (an anagram of the book's title) is good enough to stand alone. I have not read Hickam's memoirs, but I am still able to enjoy and understand their film adaptation. The film, directed by Joe Johnston and written by Lewis Colick, records the story of teenager Homer Hickam (Jake Gyllenhaal), beginning in October of 195 positive positive
learn.blurr_predict("This was a really good movie")
[(('positive',), (#1) [tensor(1)], (#1) [tensor([0.3496, 0.6504])])]
learn.export(fname=f'{export_fname}.pkl')
inf_learn = load_learner(fname=f'{export_fname}.pkl')
inf_learn.blurr_predict("This movie should not be seen by anyone!!!!")
[(('negative',), (#1) [tensor(0)], (#1) [tensor([0.8531, 0.1469])])]

Using the low-level API

Thanks to the BlurrDataLoader, there isn't really anything you have to do to use plain ol' PyTorch or fast.ai Datasets and DataLoaders with Blurr. Let's take a look at fine-tuning a model against Glue's MRPC dataset ...

from datasets import load_dataset
from blurr.data.core import preproc_hf_dataset

raw_datasets = load_dataset("glue", "mrpc")
Reusing dataset glue (/home/wgilliam/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
def tokenize_function(example):
    return hf_tokenizer(example["sentence1"], example["sentence2"], truncation=True)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
trn_dl = BlurrDataLoader(tokenized_datasets["train"], 
                         hf_arch=hf_arch, hf_config=hf_config, hf_tokenizer=hf_tokenizer, hf_model=hf_model,
                         preproccesing_func=preproc_hf_dataset, shuffle=True, batch_size=8)

val_dl = BlurrDataLoader(tokenized_datasets["validation"],
                         hf_arch=hf_arch, hf_config=hf_config, hf_tokenizer=hf_tokenizer, hf_model=hf_model,
                         preproccesing_func=preproc_hf_dataset, batch_size=16)

dls = DataLoaders(trn_dl, val_dl)

And with our fast.ai DataLoaders in hand, we can train our model using the high- or low-level Blurr API. The BlurrDataLoader class sets up everything so that we can use our objects just as if we built our DataLoaders with the mid-level DataBlock API. This means we get back methods like one_batch, show_batch, show_results, etc., at all levels of Blurr's API.

learn = BlearnerForSequenceClassification(dls, hf_model, loss_func=CrossEntropyLossFlat())
learn.lr_find()
/home/wgilliam/miniconda3/envs/blurr/lib/python3.9/site-packages/fastai/callback/schedule.py:270: UserWarning: color is redundantly defined by the 'color' keyword argument and the fmt string "ro" (-> color='r'). The keyword argument will take precedence.
  ax.plot(val, idx, 'ro', label=nm, c=color)
SuggestedLRs(valley=0.0010000000474974513)
learn.fit_one_cycle(1, lr_max=1e-3)
epoch train_loss valid_loss time
0 0.498150 0.497868 00:14
learn.show_results(learner=learn, max_n=2, trunc_at=500)
text target prediction
0 Amazon.com shipped out more than a million copies of the new book, making Saturday the largest distribution day of a single item in e-commerce history. Amazon.com shipped more than a million copies by Saturday afternoon, making Saturday the largest distribution day of a single item in e-commerce history. 1 1
1 Singapore Prime Minister Goh Chok Tong says China plays an important role in the integration of Asia, including managing the stresses and strains both within and between countries. HAINAN PROVINCE, China : Singapore Prime Minister Goh Chok Tong said China plays an important role in the integration of Asia. 0 1

Tests

The tests below ensure the core training code above works for all pretrained sequence classification models available in Hugging Face. These tests are excluded from the CI workflow because of how long they would take to run and the amount of data that would be required to download.

Note: Feel free to modify the code below to test whatever pretrained classification models you are working with ... and if any of your pretrained sequence classification models fail, please submit a GitHub issue (or a PR if you'd like to fix it yourself).
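
The test harness itself isn't included here, but a hedged, hypothetical sketch of such a loop might look like this (the checkpoint list and results bookkeeping are illustrative only, not blurr API):

# Try fine-tuning each pretrained checkpoint for one epoch and record whether it trained.
pretrained_model_names = ['bert-base-uncased', 'distilroberta-base', 'albert-base-v2']
results = []
for name in pretrained_model_names:
    try:
        learn = BlearnerForSequenceClassification.from_dataframe(
            imdb_df, name, text_attr='text', label_attr='label', dl_kwargs={'bs': 4})
        learn.fit_one_cycle(1, lr_max=1e-3)
        results.append((name, 'PASSED', ''))
    except Exception as e:
        results.append((name, 'FAILED', str(e)))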

arch tokenizer model result error
0 albert AlbertTokenizerFast AlbertForSequenceClassification PASSED
1 bart BartTokenizerFast BartForSequenceClassification PASSED
2 bert BertTokenizerFast BertForSequenceClassification PASSED
3 big_bird BigBirdTokenizerFast BigBirdForSequenceClassification PASSED
4 ctrl CTRLTokenizer CTRLForSequenceClassification PASSED
5 camembert CamembertTokenizerFast CamembertForSequenceClassification PASSED
6 convbert ConvBertTokenizerFast ConvBertForSequenceClassification PASSED
7 deberta DebertaTokenizerFast DebertaForSequenceClassification PASSED
8 deberta_v2 DebertaV2Tokenizer DebertaV2ForSequenceClassification PASSED
9 distilbert DistilBertTokenizerFast DistilBertForSequenceClassification PASSED
10 electra ElectraTokenizerFast ElectraForSequenceClassification PASSED
11 flaubert FlaubertTokenizer FlaubertForSequenceClassification PASSED
12 funnel FunnelTokenizerFast FunnelForSequenceClassification PASSED
13 gpt2 GPT2TokenizerFast GPT2ForSequenceClassification PASSED
14 ibert RobertaTokenizer IBertForSequenceClassification PASSED
15 led LEDTokenizerFast LEDForSequenceClassification FAILED You have to specify either decoder_input_ids or decoder_inputs_embeds
16 layoutlm LayoutLMTokenizerFast LayoutLMForSequenceClassification PASSED
17 longformer LongformerTokenizerFast LongformerForSequenceClassification PASSED
18 mbart MBartTokenizerFast MBartForSequenceClassification PASSED
19 mpnet MPNetTokenizerFast MPNetForSequenceClassification PASSED
20 mobilebert MobileBertTokenizerFast MobileBertForSequenceClassification PASSED
21 openai OpenAIGPTTokenizerFast OpenAIGPTForSequenceClassification PASSED
22 roberta RobertaTokenizerFast RobertaForSequenceClassification PASSED
23 squeezebert SqueezeBertTokenizerFast SqueezeBertForSequenceClassification PASSED
24 transfo_xl TransfoXLTokenizer TransfoXLForSequenceClassification PASSED
25 xlm XLMTokenizer XLMForSequenceClassification PASSED

Summary

This module includes the fundamental building blocks for training using Blurr.