utils
Utility classes and functions used by the BLURR library.

Utility classes

Singleton

Singleton (cls)

Initialize self. See help(type(self)) for accurate signature.

Singleton functions as a python decorator. Use this above any class to turn that class into a singleton (see here for more info on the singleton pattern).

How to use:

@Singleton
class TestSingleton:
    pass

a = TestSingleton()
b = TestSingleton()
test_eq(a, b)
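For reference, a class-wrapping decorator along these lines can be written as in the sketch below. This is a minimal illustration of the pattern, not necessarily BLURR's exact implementation:

class Singleton:
    def __init__(self, cls):
        self._cls = cls
        self._instance = None

    def __call__(self, *args, **kwargs):
        # Build the wrapped class exactly once; every later call returns the same object
        if self._instance is None:
            self._instance = self._cls(*args, **kwargs)
        return self._instance

Note that the decorated name ends up bound to a Singleton instance rather than a class, which is why str_to_type("TestSingleton") below resolves to a Singleton object.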
Utility methods
str_to_type
str_to_type (typename:str)
Converts a type represented as a string to the actual class
Type | Details | |
---|---|---|
typename | str | The name of a type as a string |
Returns | type | Returns the actual type |
How to use:
print(str_to_type("test_eq"))
print(str_to_type("TestSingleton"))
<function test_eq>
<__main__.Singleton object>
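Under the hood this can be as simple as looking the name up in the current module's namespace. A minimal sketch, assuming the type is defined (or imported) in the calling module:

import sys

def str_to_type(typename: str) -> type:
    "Converts a type represented as a string to the actual class"
    # Resolve the name against this module's global namespace
    return getattr(sys.modules[__name__], typename)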
print_versions
print_versions (packages:str|list[str])
Prints the name and version of one or more packages in your environment
Type | Details | |
---|---|---|
packages | str | list[str] | A string of space delimited package names or a list of package names |
How to use:
"torch transformers fastai")
print_versions(print("---")
"torch", "transformers", "fastai"]) print_versions([
torch: 1.9.0+cu102
transformers: 4.21.2
fastai: 2.7.9
---
torch: 1.9.0+cu102
transformers: 4.21.2
fastai: 2.7.9
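A rough sketch of what such a helper can do, using importlib.metadata to read installed package versions (the real implementation may query __version__ attributes instead):

from importlib.metadata import version

def print_versions(packages):
    "Prints the name and version of one or more packages in your environment"
    # Accept either a space-delimited string or a list of package names
    package_names = packages.split(" ") if isinstance(packages, str) else packages
    for name in package_names:
        print(f"{name}: {version(name)}")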
set_seed
set_seed (seed_value:int=42)
This needs to be run before creating your DataLoaders, before creating your Learner, and before each call to your fit function to help ensure reproducibility.
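A typical seeding recipe for this kind of helper is sketched below; this is an assumption about its behavior (seeding Python's, NumPy's, and PyTorch's random number generators), not necessarily BLURR's exact code:

import random

import numpy as np
import torch

def set_seed(seed_value: int = 42):
    # Seed every RNG that fastai/PyTorch training touches
    random.seed(seed_value)
    np.random.seed(seed_value)
    torch.manual_seed(seed_value)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed_value)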
reset_memory
reset_memory (learn:fastai.learner.Learner=None)
A function that clears GPU memory.
Type | Default | Details | |
---|---|---|---|
learn | Learner | None | The fastai learner to delete |
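What "clearing GPU memory" typically involves is sketched below; again, this is an assumption about the behavior rather than BLURR's exact code:

import gc

import torch

def reset_memory(learn=None):
    "A function that clears GPU memory"
    # Drop the reference to the learner (callers should delete their own reference too),
    # then force garbage collection and release PyTorch's cached CUDA memory
    if learn is not None:
        del learn
    gc.collect()
    torch.cuda.empty_cache()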
Loss functions
PreCalculatedMSELoss
PreCalculatedMSELoss (*args, axis=-1, floatify=True, **kwargs)
If you want to let your Hugging Face model calculate the loss for you, make sure you include the labels argument in your inputs and use PreCalculatedLoss as your loss function. Even though we don't really need a loss function per se, we have to provide a custom loss class/function for fastai to function properly (e.g. one with decodes and activation methods). Why? Because these methods will get called in methods like show_results to get the actual predictions.

Note: The Hugging Face models will always calculate the loss for you if you pass a labels dictionary along with your other inputs (so only include it if that is what you intend to happen).
PreCalculatedBCELoss
PreCalculatedBCELoss (*args, axis:int=-1, floatify:bool=True, thresh:float=0.5, weight=None, reduction='mean', pos_weight=None, flatten:bool=True, is_2d:bool=True)
If you want to let your Hugging Face model calculate the loss for you, make sure you include the labels argument in your inputs and use PreCalculatedLoss as your loss function. Even though we don't really need a loss function per se, we have to provide a custom loss class/function for fastai to function properly (e.g. one with decodes and activation methods). Why? Because these methods will get called in methods like show_results to get the actual predictions.

Note: The Hugging Face models will always calculate the loss for you if you pass a labels dictionary along with your other inputs (so only include it if that is what you intend to happen).
PreCalculatedCrossEntropyLoss
PreCalculatedCrossEntropyLoss (*args, axis:int=-1, weight=None, ignore_index=-100, reduction='mean', flatten:bool=True, floatify:bool=False, is_2d:bool=True)
If you want to let your Hugging Face model calculate the loss for you, make sure you include the labels argument in your inputs and use PreCalculatedLoss as your loss function. Even though we don't really need a loss function per se, we have to provide a custom loss class/function for fastai to function properly (e.g. one with decodes and activation methods). Why? Because these methods will get called in methods like show_results to get the actual predictions.

Note: The Hugging Face models will always calculate the loss for you if you pass a labels dictionary along with your other inputs (so only include it if that is what you intend to happen).
PreCalculatedLoss
PreCalculatedLoss (loss_cls, *args, axis:int=-1, flatten:bool=True, floatify:bool=False, is_2d:bool=True, **kwargs)
If you want to let your Hugging Face model calculate the loss for you, make sure you include the labels argument in your inputs and use PreCalculatedLoss as your loss function. Even though we don't really need a loss function per se, we have to provide a custom loss class/function for fastai to function properly (e.g. one with decodes and activation methods). Why? Because these methods will get called in methods like show_results to get the actual predictions.

Note: The Hugging Face models will always calculate the loss for you if you pass a labels dictionary along with your other inputs (so only include it if that is what you intend to happen).
Type | Default | Details | |
---|---|---|---|
loss_cls | Uninitialized PyTorch-compatible loss | ||
args | |||
axis | int | -1 | |
flatten | bool | True | |
floatify | bool | False | |
is_2d | bool | True | |
kwargs |
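As a usage illustration, the snippet below builds a Learner with PreCalculatedCrossEntropyLoss; dls and model are placeholders for a blurr-built DataLoaders and a Hugging Face model whose inputs already include labels:

from fastai.learner import Learner

# The model computes the loss itself because `labels` is part of the inputs;
# the pre-calculated loss class only supplies the decodes/activation hooks fastai expects.
learn = Learner(dls, model, loss_func=PreCalculatedCrossEntropyLoss())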
MultiTargetLoss
MultiTargetLoss (loss_classes:list[Callable]=[<class 'fastai.losses.CrossEntropyLossFlat'>, <class 'fastai.losses.CrossEntropyLossFlat'>], loss_classes_kwargs:list[dict]=[{}, {}], weights:list[float]|list[int]=[1, 1], reduction:str='mean')
Provides the ability to apply different loss functions to multi-modal targets/predictions.
This new loss function can be used in many other multi-modal architectures, with any mix of loss functions. For example, it can be amended to include the is_impossible task as well as the start/end token tasks in the SQuAD v2 dataset (or in any extractive question answering task).
Type | Default | Details | |
---|---|---|---|
loss_classes | list[Callable] | [<class ‘fastai.losses.CrossEntropyLossFlat’>, <class ‘fastai.losses.CrossEntropyLossFlat’>] | The loss function for each target |
loss_classes_kwargs | list[dict] | [{}, {}] | Any kwargs you want to pass to the loss functions above |
weights | list[float]|list[int] | [1, 1] | The weights you want to apply to each loss (default: [1,1]) |
reduction | str | mean | The reduction parameter of the loss function (default: 'mean') |
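For example, an illustrative sketch for an extractive question answering setup where the model predicts start and end token positions (two targets, each scored with CrossEntropyLossFlat):

from fastai.losses import CrossEntropyLossFlat

# One loss class per target, weighted equally
qa_loss_func = MultiTargetLoss(
    loss_classes=[CrossEntropyLossFlat, CrossEntropyLossFlat],
    loss_classes_kwargs=[{}, {}],
    weights=[1, 1],
)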