utils

Various utility classes and functions used by the BLURR library.

Utility classes


source

Singleton

 Singleton (cls)

Singleton functions as a Python decorator. Apply it to any class to turn that class into a singleton, so that every instantiation returns the same instance (see the singleton design pattern for more background).

from fastcore.test import test_eq


@Singleton
class TestSingleton:
    pass


a = TestSingleton()
b = TestSingleton()
test_eq(a, b)  # both variables point to the same instance

Utility methods


source

str_to_type

 str_to_type (typename:str)

Converts a type represented as a string to the actual class

          Type  Details
typename  str   The name of a type as a string
Returns   type  The actual type

How to use:

print(str_to_type("test_eq"))
print(str_to_type("TestSingleton"))
<function test_eq>
<__main__.Singleton object>

source

print_versions

 print_versions (packages:str|list[str])

Prints the name and version of one or more packages in your environment

          Type             Details
packages  str | list[str]  A space-delimited string of package names, or a list of package names

How to use:

print_versions("torch transformers fastai")
print("---")
print_versions(["torch", "transformers", "fastai"])
torch: 1.9.0+cu102
transformers: 4.21.2
fastai: 2.7.9
---
torch: 1.9.0+cu102
transformers: 4.21.2
fastai: 2.7.9

source

set_seed

 set_seed (seed_value:int=42)

This needs to be run before creating your DataLoaders, before creating your Learner, and before each call to your fit function to help ensure reproducibility.
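
For example, re-seeding before each random operation yields identical draws (a minimal sketch, assuming set_seed seeds PyTorch's RNG, as is typical for such helpers):

import torch

set_seed(42)
a = torch.rand(3)

set_seed(42)
b = torch.rand(3)

assert torch.equal(a, b)  # identical tensors after re-seeding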


source

reset_memory

 reset_memory (learn:fastai.learner.Learner=None)

A function that clears GPU memory.

       Type     Default  Details
learn  Learner  None     The fastai Learner to delete
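
A rough equivalent of what this does (an illustrative sketch, not the library's exact implementation):

import gc

import torch


def rough_reset_memory(learn=None):
    # illustrative stand-in for reset_memory: drop the learner,
    # garbage collect, and empty the CUDA cache
    if learn is not None:
        del learn
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()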

Loss functions


source

PreCalculatedMSELoss

 PreCalculatedMSELoss (*args, axis=-1, floatify=True, **kwargs)

If you want to let your Hugging Face model calculate the loss for you, make sure you include the labels argument in your inputs and use one of the PreCalculatedLoss classes as your loss function. Even though we don't really need a loss function per se, we have to provide a custom loss class/function for fastai to function properly (e.g., one with decodes and activation methods). Why? Because these methods get called in methods like show_results to get the actual predictions.

Note: A Hugging Face model will always calculate the loss for you if you include labels in your inputs, so only include it if that is what you intend to happen.
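
For instance, a Hugging Face model returns its own loss as soon as labels is part of the inputs (a minimal sketch using transformers directly; the model name is illustrative):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("BLURR makes fastai and transformers play nicely", return_tensors="pt")
inputs["labels"] = torch.tensor([1])  # including labels makes the model compute a loss

outputs = model(**inputs)
print(outputs.loss)  # the pre-calculated loss that gets handed back to fastai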


source

PreCalculatedBCELoss

 PreCalculatedBCELoss (*args, axis:int=-1, floatify:bool=True,
                       thresh:float=0.5, weight=None, reduction='mean',
                       pos_weight=None, flatten:bool=True,
                       is_2d:bool=True)

See the note under PreCalculatedMSELoss above: include the labels argument in your inputs to let the Hugging Face model calculate the loss for you.


source

PreCalculatedCrossEntropyLoss

 PreCalculatedCrossEntropyLoss (*args, axis:int=-1, weight=None,
                                ignore_index=-100, reduction='mean',
                                flatten:bool=True, floatify:bool=False,
                                is_2d:bool=True)

See the note under PreCalculatedMSELoss above: include the labels argument in your inputs to let the Hugging Face model calculate the loss for you.


source

PreCalculatedLoss

 PreCalculatedLoss (loss_cls, *args, axis:int=-1, flatten:bool=True,
                    floatify:bool=False, is_2d:bool=True, **kwargs)

See the note under PreCalculatedMSELoss above for why these pre-calculated loss classes exist and when to use them.

          Type  Default  Details
loss_cls                 An uninitialized PyTorch-compatible loss
args
axis      int   -1
flatten   bool  True
floatify  bool  False
is_2d     bool  True
kwargs
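
A minimal sketch of why activation and decodes matter, assuming these classes follow the conventions of the fastai loss wrappers (such as CrossEntropyLossFlat) that they build on:

import torch

loss_func = PreCalculatedCrossEntropyLoss()

logits = torch.randn(4, 3)            # e.g., 4 examples, 3 classes
probs = loss_func.activation(logits)  # softmax; used by methods like show_results
preds = loss_func.decodes(logits)     # argmax, i.e. predicted class ids
print(probs.shape, preds)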

source

MultiTargetLoss

 MultiTargetLoss (loss_classes:list[Callable]=[CrossEntropyLossFlat,
                  CrossEntropyLossFlat], loss_classes_kwargs:list[dict]=[{},
                  {}], weights:list[float]|list[int]=[1, 1],
                  reduction:str='mean')

Provides the ability to apply different loss functions to multiple targets/predictions.

This loss function can be used in many multi-target architectures, with any mix of loss functions. For example, it can be amended to cover the is_impossible task as well as the start/end token tasks in the SQuAD v2 dataset (or in any extractive question answering task).

                     Type                     Default                                       Details
loss_classes         list[Callable]           [CrossEntropyLossFlat, CrossEntropyLossFlat]  The loss function for each target
loss_classes_kwargs  list[dict]               [{}, {}]                                      Any kwargs you want to pass to the loss functions above
weights              list[float] | list[int]  [1, 1]                                        The weights you want to apply to each loss
reduction            str                      mean                                          The reduction parameter of the loss function
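
How to use (a minimal sketch; the two losses here cover a start-token and an end-token target, as in extractive question answering):

from fastai.losses import CrossEntropyLossFlat

loss_func = MultiTargetLoss(
    loss_classes=[CrossEntropyLossFlat, CrossEntropyLossFlat],  # one loss per target
    loss_classes_kwargs=[{}, {}],
    weights=[1, 1],      # weight each target's loss equally
    reduction="mean",
)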