🔖 @huggingface/inference v2.0.0
machineuser committed Apr 19, 2023
1 parent c19bf42 commit 2e648dc
Showing 45 changed files with 2,321 additions and 917 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -64,7 +64,7 @@ You can run our packages with vanilla JS, without any bundler, by using a CDN or
 ```html
 
 <script type="module">
-import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@1.8.0/+esm';
+import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@2.0.0/+esm';
 import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/hub@0.5.0/+esm";
 </script>
 ```
70 changes: 34 additions & 36 deletions docs/_toctree.yml
@@ -62,51 +62,49 @@
   sections:
     - title: HfInference
       local: inference/classes/HfInference
-  - title: Enums
-    sections:
-      - title: TextGenerationStreamFinishReason
-        local: inference/enums/TextGenerationStreamFinishReason
+    - title: HfInferenceEndpoint
+      local: inference/classes/HfInferenceEndpoint
   - title: Interfaces
     sections:
-      - title: Args
-        local: inference/interfaces/Args
-      - title: AudioClassificationReturnValue
-        local: inference/interfaces/AudioClassificationReturnValue
-      - title: AutomaticSpeechRecognitionReturn
-        local: inference/interfaces/AutomaticSpeechRecognitionReturn
-      - title: ConversationalReturn
-        local: inference/interfaces/ConversationalReturn
-      - title: ImageClassificationReturnValue
-        local: inference/interfaces/ImageClassificationReturnValue
-      - title: ImageSegmentationReturnValue
-        local: inference/interfaces/ImageSegmentationReturnValue
-      - title: ImageToTextReturn
-        local: inference/interfaces/ImageToTextReturn
-      - title: ObjectDetectionReturnValue
-        local: inference/interfaces/ObjectDetectionReturnValue
+      - title: AudioClassificationOutputValue
+        local: inference/interfaces/AudioClassificationOutputValue
+      - title: AutomaticSpeechRecognitionOutput
+        local: inference/interfaces/AutomaticSpeechRecognitionOutput
+      - title: BaseArgs
+        local: inference/interfaces/BaseArgs
+      - title: ConversationalOutput
+        local: inference/interfaces/ConversationalOutput
+      - title: ImageClassificationOutputValue
+        local: inference/interfaces/ImageClassificationOutputValue
+      - title: ImageSegmentationOutputValue
+        local: inference/interfaces/ImageSegmentationOutputValue
+      - title: ImageToTextOutput
+        local: inference/interfaces/ImageToTextOutput
+      - title: ObjectDetectionOutputValue
+        local: inference/interfaces/ObjectDetectionOutputValue
       - title: Options
         local: inference/interfaces/Options
-      - title: QuestionAnswerReturn
-        local: inference/interfaces/QuestionAnswerReturn
-      - title: SummarizationReturn
-        local: inference/interfaces/SummarizationReturn
-      - title: TableQuestionAnswerReturn
-        local: inference/interfaces/TableQuestionAnswerReturn
-      - title: TextGenerationReturn
-        local: inference/interfaces/TextGenerationReturn
+      - title: QuestionAnsweringOutput
+        local: inference/interfaces/QuestionAnsweringOutput
+      - title: SummarizationOutput
+        local: inference/interfaces/SummarizationOutput
+      - title: TableQuestionAnsweringOutput
+        local: inference/interfaces/TableQuestionAnsweringOutput
+      - title: TextGenerationOutput
+        local: inference/interfaces/TextGenerationOutput
       - title: TextGenerationStreamBestOfSequence
         local: inference/interfaces/TextGenerationStreamBestOfSequence
       - title: TextGenerationStreamDetails
         local: inference/interfaces/TextGenerationStreamDetails
+      - title: TextGenerationStreamOutput
+        local: inference/interfaces/TextGenerationStreamOutput
       - title: TextGenerationStreamPrefillToken
         local: inference/interfaces/TextGenerationStreamPrefillToken
-      - title: TextGenerationStreamReturn
-        local: inference/interfaces/TextGenerationStreamReturn
       - title: TextGenerationStreamToken
         local: inference/interfaces/TextGenerationStreamToken
-      - title: TokenClassificationReturnValue
-        local: inference/interfaces/TokenClassificationReturnValue
-      - title: TranslationReturn
-        local: inference/interfaces/TranslationReturn
-      - title: ZeroShotClassificationReturnValue
-        local: inference/interfaces/ZeroShotClassificationReturnValue
+      - title: TokenClassificationOutputValue
+        local: inference/interfaces/TokenClassificationOutputValue
+      - title: TranslationOutput
+        local: inference/interfaces/TranslationOutput
+      - title: ZeroShotClassificationOutputValue
+        local: inference/interfaces/ZeroShotClassificationOutputValue
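These table-of-contents entries track the v2.0.0 renames: `*Return`/`*ReturnValue` types become `*Output`/`*OutputValue`, `Args` becomes `BaseArgs`, and the task methods gain their full task names (see the README diff further down). As an illustrative sketch of the consumer-side migration — model, inputs, and field names are taken from the docs in this commit, and the type import assumes the interfaces are re-exported from the package root:

```ts
import { HfInference } from "@huggingface/inference";
// v1.x exported `SummarizationReturn`; in v2.0.0 the same shape is named `SummarizationOutput`
import type { SummarizationOutput } from "@huggingface/inference";

const hf = new HfInference("hf_...");

// v1.x: hf.questionAnswer(...) — v2.0.0 renames the method to match the full task name
const answer = await hf.questionAnswering({
  model: "deepset/roberta-base-squad2",
  inputs: {
    question: "What is the capital of France?",
    context: "The capital of France is Paris.",
  },
});

const summary: SummarizationOutput = await hf.summarization({
  model: "facebook/bart-large-cnn",
  inputs: "The tower is 324 metres tall, about the same height as an 81-storey building.",
});

console.log(answer.answer, summary.summary_text);
```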
84 changes: 57 additions & 27 deletions docs/index.md
@@ -9,12 +9,28 @@
 <br/>
 </p>
 
+```ts
+await inference.translation({
+  model: 't5-base',
+  inputs: 'My name is Wolfgang and I live in Berlin'
+})
+
+await inference.textToImage({
+  model: 'stabilityai/stable-diffusion-2',
+  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
+  parameters: {
+    negative_prompt: 'blurry',
+  }
+})
+```
+
 # Hugging Face JS libraries
 
 This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.
 
+- [@huggingface/inference](inference/README): Use the Inference API to make calls to 100,000+ Machine Learning models, or your own [inference endpoints](https://hf.co/docs/inference-endpoints/)!
 - [@huggingface/hub](hub/README): Interact with huggingface.co to create or delete repos and commit / download files
-- [@huggingface/inference](inference/README): Use the Inference API to make calls to 100,000+ Machine Learning models!
 
 With more to come, like `@huggingface/endpoints` to manage your HF Endpoints!
 
@@ -29,15 +45,15 @@ The libraries are still very young, please help us by opening issues!
 To install via NPM, you can download the libraries as needed:
 
 ```bash
-npm install @huggingface/hub
 npm install @huggingface/inference
+npm install @huggingface/hub
 ```
 
 Then import the libraries in your code:
 
 ```ts
-import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
 import { HfInference } from "@huggingface/inference";
+import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
 import type { RepoId, Credentials } from "@huggingface/hub";
 ```
 
@@ -48,18 +64,52 @@ You can run our packages with vanilla JS, without any bundler, by using a CDN or
 ```html
 
 <script type="module">
-import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@1.8.0/+esm';
+import { HfInference } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@2.0.0/+esm';
 import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/hub@0.5.0/+esm";
 </script>
 ```
 
-## Usage example
+## Usage examples
 
 Get your HF access token in your [account settings](https://huggingface.co/settings/tokens).
 
+### @huggingface/inference examples
+
 ```ts
-import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";
 import { HfInference } from "@huggingface/inference";
 
+// use an access token from your free account
 const HF_ACCESS_TOKEN = "hf_...";
 
+const inference = new HfInference(HF_ACCESS_TOKEN);
+
+await inference.translation({
+  model: 't5-base',
+  inputs: 'My name is Wolfgang and I live in Berlin'
+})
+
+await inference.textToImage({
+  model: 'stabilityai/stable-diffusion-2',
+  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
+  parameters: {
+    negative_prompt: 'blurry',
+  }
+})
+
+await inference.imageToText({
+  data: await (await fetch('https://picsum.photos/300/300')).blob(),
+  model: 'nlpconnect/vit-gpt2-image-captioning',
+})
+
+// Using your own inference endpoint: https://hf.co/docs/inference-endpoints/
+const gpt2 = inference.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
+const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the universe is'});
+```
+
+### @huggingface/hub examples
+
+```ts
+import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";
+
+const HF_ACCESS_TOKEN = "hf_...";
+
 await createRepo({
@@ -82,26 +132,6 @@ await deleteFiles({
   credentials: {accessToken: HF_ACCESS_TOKEN},
   paths: ["README.md", ".gitattributes"]
 });
-
-const inference = new HfInference(HF_ACCESS_TOKEN);
-
-await inference.translation({
-  model: 't5-base',
-  inputs: 'My name is Wolfgang and I live in Berlin'
-})
-
-await inference.textToImage({
-  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
-  model: 'stabilityai/stable-diffusion-2',
-  parameters: {
-    negative_prompt: 'blurry',
-  }
-})
-
-await inference.imageToText({
-  data: await (await fetch('https://picsum.photos/300/300')).blob(),
-  model: 'nlpconnect/vit-gpt2-image-captioning',
-})
 ```
 
 There are more features of course, check each library's README!
63 changes: 56 additions & 7 deletions docs/inference/README.md
@@ -1,6 +1,8 @@
 # 🤗 Hugging Face Inference API
 
-A Typescript powered wrapper for the Hugging Face Inference API. Learn more about the Inference API at [Hugging Face](https://huggingface.co/docs/api-inference/index).
+A Typescript powered wrapper for the Hugging Face Inference API. Learn more about the Inference API at [Hugging Face](https://huggingface.co/docs/api-inference/index). It also works with [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index).
 
+You can also try out a live [interactive notebook](https://observablehq.com/@huggingface/hello-huggingface-js-inference) or see some demos on [hf.co/huggingfacejs](https://huggingface.co/huggingfacejs).
+
 ## Install
 
@@ -14,16 +16,16 @@ pnpm add @huggingface/inference
 
 ## Usage
 
-**Important note:** Using an API key is optional to get started, however you will be rate limited eventually. Join [Hugging Face](https://huggingface.co/join) and then visit [access tokens](https://huggingface.co/settings/tokens) to generate your API key for **free**.
+**Important note:** Using an access token is optional to get started, however you will be rate limited eventually. Join [Hugging Face](https://huggingface.co/join) and then visit [access tokens](https://huggingface.co/settings/tokens) to generate your access token for **free**.
 
-Your API key should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the API key.
+Your access token should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the access token.
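To illustrate the proxy suggestion above — a minimal sketch, not part of this commit, assuming Node 18+ (for the built-in `fetch`) and an `HF_ACCESS_TOKEN` environment variable, with error handling omitted:

```ts
import { createServer } from "node:http";

// The token stays on the server; the browser only ever talks to this proxy.
const HF_ACCESS_TOKEN = process.env.HF_ACCESS_TOKEN;

createServer(async (req, res) => {
  // Collect the incoming request body so it can be forwarded as-is
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);

  // Forward the call to the Inference API, attaching the token server-side
  const upstream = await fetch(`https://api-inference.huggingface.co${req.url}`, {
    method: req.method,
    headers: { Authorization: `Bearer ${HF_ACCESS_TOKEN}` },
    body: chunks.length > 0 ? Buffer.concat(chunks) : undefined,
  });

  res.writeHead(upstream.status, {
    "content-type": upstream.headers.get("content-type") ?? "application/json",
  });
  res.end(Buffer.from(await upstream.arrayBuffer()));
}).listen(3000);
```

The browser then calls this server instead of the Inference API, so the token never reaches client-side code.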

 ### Basic examples
 
 ```typescript
 import { HfInference } from '@huggingface/inference'
 
-const hf = new HfInference('your api key')
+const hf = new HfInference('your access token')
 
 // Natural Language
 
@@ -41,15 +43,15 @@ await hf.summarization({
   }
 })
 
-await hf.questionAnswer({
+await hf.questionAnswering({
   model: 'deepset/roberta-base-squad2',
   inputs: {
     question: 'What is the capital of France?',
     context: 'The capital of France is Paris.'
   }
 })
 
-await hf.tableQuestionAnswer({
+await hf.tableQuestionAnswering({
   model: 'google/tapas-base-finetuned-wtq',
   inputs: {
     query: 'How many stars does the transformers repository have?',
@@ -107,7 +109,7 @@ await hf.conversational({
   }
 })
 
-await hf.featureExtraction({
+await hf.sentenceSimilarity({
   model: 'sentence-transformers/paraphrase-xlm-r-multilingual-v1',
   inputs: {
     source_sentence: 'That is a happy person',
@@ -119,6 +121,11 @@ }
   }
 })
 
+await hf.featureExtraction({
+  model: "sentence-transformers/distilbert-base-nli-mean-tokens",
+  inputs: "That is a happy person",
+});
+
 // Audio
 
 await hf.automaticSpeechRecognition({
@@ -160,6 +167,30 @@ await hf.imageToText({
   data: readFileSync('test/cats.png'),
   model: 'nlpconnect/vit-gpt2-image-captioning'
 })
+
+// Custom call, for models with custom parameters / outputs
+await hf.request({
+  model: 'my-custom-model',
+  inputs: 'hello world',
+  parameters: {
+    custom_param: 'some magic',
+  }
+})
+
+// Custom streaming call, for models with custom parameters / outputs
+for await (const output of hf.streamingRequest({
+  model: 'my-custom-model',
+  inputs: 'hello world',
+  parameters: {
+    custom_param: 'some magic',
+  }
+})) {
+  // handle each streamed output chunk here as it arrives
+}
+
+// Using your own inference endpoint: https://hf.co/docs/inference-endpoints/
+const gpt2 = hf.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
+const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the universe is'});
 ```
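Besides the generic `streamingRequest` above, the `TextGenerationStream*` interfaces added in this release back the typed `textGenerationStream` helper; a short sketch of consuming it (model and prompt are illustrative):

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_...");

// Each chunk carries the newly generated token; the final chunk
// also carries the full `generated_text`.
for await (const output of hf.textGenerationStream({
  model: "google/flan-t5-xxl", // illustrative model
  inputs: "Q: What is the capital of France? A:",
  parameters: { max_new_tokens: 20 },
})) {
  process.stdout.write(output.token.text);
}
```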

 ## Supported Tasks
@@ -179,6 +210,7 @@ await hf.imageToText({
 - [x] Zero-shot classification
 - [x] Conversational
 - [x] Feature extraction
+- [x] Sentence Similarity
 
 ### Audio
 
@@ -193,6 +225,23 @@ await hf.imageToText({
 - [x] Text to image
 - [x] Image to text
 
+## Tree-shaking
+
+You can import the functions you need directly from the module, rather than using the `HfInference` class:
+
+```ts
+import { textGeneration } from "@huggingface/inference";
+
+await textGeneration({
+  accessToken: "hf_...",
+  model: "model_or_endpoint",
+  inputs: "The answer to the universe is", // any model input works here
+  parameters: { max_new_tokens: 20 } // optional, model-specific
+})
+```
+
+This will enable tree-shaking by your bundler.
+
 ## Running tests
 
 ```console