Merge LongRoPE branch #2

Merged
64 commits merged on May 20, 2024
Commits
2d18e95
init
JiahangXu Feb 21, 2024
2270c77
init
JiahangXu Feb 23, 2024
d726ba1
modified: script/ppl_eval/t6/ppl_eval_books3_llama2_t6.sh
Yiyi-philosophy Feb 23, 2024
85c9682
needle
Yiyi-philosophy Feb 26, 2024
81374a1
add
Yiyi-philosophy Feb 26, 2024
1daa3db
edit load pt
Yiyi-philosophy Feb 27, 2024
cce6717
ppl eval proof
Yiyi-philosophy Mar 1, 2024
4248e30
search code clean
Yiyi-philosophy Mar 5, 2024
66ec7b3
load .pt
Yiyi-philosophy Mar 5, 2024
e5082e1
needle test
Yiyi-philosophy Mar 11, 2024
101bee3
needle 2
Mar 13, 2024
0aed562
needle
Mar 13, 2024
291fec4
com
Yiyi-philosophy Mar 14, 2024
940091e
com
Yiyi-philosophy Mar 14, 2024
5e2ae31
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 14, 2024
27a32ec
needle bos eos + hf-eval
Yiyi-philosophy Mar 18, 2024
feb3a37
needle ok + mmlu ok
Yiyi-philosophy Mar 19, 2024
3bbaa13
tmp hf eval
Yiyi-philosophy Mar 19, 2024
e9ecf6b
hf_eval req
Yiyi-philosophy Mar 19, 2024
af2dc08
hf-eval
Mar 20, 2024
ecde96f
ignore
Yiyi-philosophy Mar 20, 2024
8f98f05
tmp
Yiyi-philosophy Mar 20, 2024
fbb11b2
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 20, 2024
3e44ca0
needle
Yiyi-philosophy Mar 20, 2024
9203c04
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 20, 2024
2e25247
needle
Yiyi-philosophy Mar 21, 2024
6f536af
needle
Yiyi-philosophy Mar 21, 2024
93395e6
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 21, 2024
151bd56
needle
Yiyi-philosophy Mar 22, 2024
f9cc927
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 22, 2024
c48c12b
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 22, 2024
c1d9132
support commonsense winogrande and gsm8k eval
JiahangXu Mar 23, 2024
12d56cc
start token
Yiyi-philosophy Mar 24, 2024
dd4b30f
start
Yiyi-philosophy Mar 24, 2024
5f9606f
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 24, 2024
c87e1d5
start
Yiyi-philosophy Mar 24, 2024
55a2ce1
pi static search
Yiyi-philosophy Mar 24, 2024
477b954
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 24, 2024
ed2394d
hf eval mis-256k
Yiyi-philosophy Mar 24, 2024
91cc772
hf eval get
Mar 24, 2024
d1b7a80
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Mar 24, 2024
1c8a4cc
hf-eval
Mar 24, 2024
12f6990
la2 bf16
Yiyi-philosophy Mar 26, 2024
a14b7f3
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Mar 26, 2024
a3f3a1a
ignore
Apr 4, 2024
9f3c641
change
JiahangXu Apr 7, 2024
24a2750
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
JiahangXu Apr 7, 2024
4b37eac
tmps search
Apr 7, 2024
9afa948
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Apr 7, 2024
4fff2ea
tmps search
JiahangXu Apr 8, 2024
4848139
books
Yiyi-philosophy Apr 8, 2024
a813811
tmps - hf-eval
Yiyi-philosophy Apr 11, 2024
bd740e7
50003
JiahangXu Apr 24, 2024
6d27158
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
JiahangXu Apr 24, 2024
c38be74
passkey
Yiyi-philosophy Apr 24, 2024
82535b5
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Apr 24, 2024
9cc6ea7
512k-pose
JiahangXu Apr 24, 2024
fdebbc6
base pk
Yiyi-philosophy Apr 24, 2024
8ae1761
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy Apr 24, 2024
1f9ae68
4k eval
Yiyi-philosophy May 6, 2024
27e322a
count dataset
Yiyi-philosophy May 6, 2024
cecb8b3
Merge branch 'longrope' of github_x:microsoft/LongRoPE into longrope
Yiyi-philosophy May 6, 2024
1464f4f
add mandatory files
JiahangXu May 20, 2024
5d6e10a
update codeql config
JiahangXu May 20, 2024
82 changes: 82 additions & 0 deletions .github/workflows/codeql.yml
@@ -0,0 +1,82 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  schedule:
    - cron: '30 4 * * 2'

jobs:
  analyze:
    name: Analyze (${{ matrix.language }})
    # Runner size impacts CodeQL analysis time. To learn more, please see:
    #   - https://gh.io/recommended-hardware-resources-for-running-codeql
    #   - https://gh.io/supported-runners-and-hardware-resources
    #   - https://gh.io/using-larger-runners (GitHub.com only)
    # Consider using larger runners or machines with greater resources for possible analysis time improvements.
    runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
    timeout-minutes: ${{ (matrix.language == 'swift' && 120) || 360 }}
    permissions:
      # required for all workflows
      security-events: write

      # required to fetch internal or private CodeQL packs
      packages: read

      # only required for workflows in private repositories
      actions: read
      contents: read

    strategy:
      fail-fast: false
      matrix:
        language: ['python']
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
          build-mode: ${{ matrix.build-mode }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.

          # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
          # queries: security-extended,security-and-quality

      # If the analyze step fails for one of the languages you are analyzing with
      # "We were unable to automatically build your code", modify the matrix above
      # to set the build mode to "manual" for that language. Then modify this step
      # to build your code.
      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
      - if: matrix.build-mode == 'manual'
        run: |
          echo 'If you are using a "manual" build mode for one or more of the' \
            'languages you are analyzing, replace this with the commands to build' \
            'your code, for example:'
          echo '  make bootstrap'
          echo '  make release'
          exit 1

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3
        with:
          category: "/language:${{matrix.language}}"
33 changes: 33 additions & 0 deletions .gitignore
@@ -3,6 +3,17 @@
##
## Get latest from https://github.com/github/gitignore/blob/main/VisualStudio.gitignore

lm_cache/*
script/hf_benchmark/ft_out_model/*


contexts/*
evaluation/needle/result/*
evaluation/needle/results/*
evaluation/needle/img
evaluation/result/*
lm_cache

# User-specific files
*.rsuser
*.suo
@@ -396,3 +407,25 @@ FodyWeavers.xsd

# JetBrains Rider
*.sln.iml
evolution/name.sh
evaluation/needle/img/longrope_mis_128k_debug_ant_1_ed.png
lm_cache/s-pi_model--mnt-yiran-teamdrive3-ExtendSeqLen-ft_out_model-cube-mis-128k-bf16-ck-1_400_method-longrope_factor-32_finetuned-false_original_max_position_embeddings-4096_cache_dir-.-cache_dir.db
evaluation/needle/img/longlora-100k.png
evaluation/log_new.txt
evolution/search_result/final-dim_mono-4100-it-4_1_2.csv
evaluation/tmp_re.py
evolution/search_result/final-dim_mono-4100-it-4_1_2.pt
evaluation/log.txt
contexts-1/
gpu_temp.txt
script/ppl_eval/compare*

longrope_*.csv
script/ppl_eval/tmp/proofpile_longrope*.csv

script/ppl_eval/t5/t5_proofpile_*.csv
script/ppl_eval/t6/t6_books3_*.csv
script/hf_benchmark/tmp-search/ft_out_model/
script/hf_benchmark/tmp-search/Llama-2-7b-hf/
script/ppl_eval/t5/4k-pose/*.csv

16 changes: 16 additions & 0 deletions Dockerfile
@@ -0,0 +1,16 @@
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel

WORKDIR /app

COPY ./requirements.txt /install/requirements.txt

RUN nvcc --version

RUN cd /install && pip install -r requirements.txt

# RUN rm -rf /install

RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y vim
58 changes: 34 additions & 24 deletions README.md
@@ -1,33 +1,43 @@
# Project
# LongRoPE

> This repo has been populated by an initial template to help get you started. Please
> make sure to update the content to build a great experience for community-building.
## Build Environment
- conda environment (flash-attn requires CUDA >= 11.7)
- `conda create -n longrope python==3.11`
- `conda activate longrope`
- `cd s-PI`
- `pip install -r requirements.txt`

As the maintainer of this project, please make a few updates:
## Eval:

- Improving this README.MD file to provide a great experience
- Updating SUPPORT.MD with content about this project's support experience
- Understanding the security reporting process in SECURITY.MD
- Remove this section from the README
### PPL
- `cd s-PI`
- `bash ./script/ppl_eval/ppl_eval_llama_2.sh`

## Contributing
### Passkey
- `cd s-PI`
- `bash ./script/pk_eval/pk_test_ft_la2.sh`

This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
### HF Eval
- `conda create -n longrope_hf_eval python==3.10.13`
- `conda activate longrope_hf_eval`
- `cd evaluation/lm-evaluation-harness/`
- `pip install -e .`
- `cd ../../`
- `pip install -r requirements.txt`
- `pip install -U datasets`
> Please ignore the error "tokenizers 0.14.1 requires huggingface_hub<0.18,>=0.16.4, but you have huggingface-hub 0.21.4 which is incompatible."; it does not affect the results.
- `pip install flash-attn==2.3.6`

## Search:
### Search the scale for base(Llama2-7b 4k) to 256k sequences
- `cd s-PI`
- `bash ./script/ppl_search/ppl_search_dim_mono_256k.sh`

### Search the scale for LongRoPE-256k to 512k sequences
- `cd s-PI`
- `bash ./script/ppl_search/ppl_search_dim_mono_512k-la2-256k.sh`

When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
184 changes: 184 additions & 0 deletions attention/config_llama_yarn.py
@@ -0,0 +1,184 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" LLaMA model configuration"""

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)

LLAMA_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class LlamaConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`LlamaModel`]. It is used to instantiate an LLaMA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LLaMA-7B.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.


Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`LlamaModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1 the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details checkout [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
pretraining_tp (`int`, *optional*, defaults to `1`):
Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
issue](https://github.com/pytorch/pytorch/issues/76232).
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
tie_word_embeddings(`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports three scaling
strategies: linear and dynamic. Their scaling factor must be an float greater than 1. The expected format
is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
attention_bias (`bool`, defaults to `False`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.

Example:

```python
>>> from transformers import LlamaModel, LlamaConfig

>>> # Initializing a LLaMA llama-7b style configuration
>>> configuration = LlamaConfig()

>>> # Initializing a model from the llama-7b style configuration
>>> model = LlamaModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "llama"
keys_to_ignore_at_inference = ["past_key_values"]

def __init__(
self,
vocab_size=32000,
hidden_size=4096,
intermediate_size=11008,
num_hidden_layers=32,
num_attention_heads=32,
num_key_value_heads=None,
hidden_act="silu",
max_position_embeddings=2048,
initializer_range=0.02,
rms_norm_eps=1e-6,
use_cache=True,
pad_token_id=0,
bos_token_id=1,
eos_token_id=2,
pretraining_tp=1,
tie_word_embeddings=False,
rope_theta=10000,
rope_scaling=None,
attention_bias=False,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads

# for backward compatibility
if num_key_value_heads is None:
num_key_value_heads = num_attention_heads

self.num_key_value_heads = num_key_value_heads
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.rms_norm_eps = rms_norm_eps
self.pretraining_tp = pretraining_tp
self.use_cache = use_cache
self.rope_theta = rope_theta
self.rope_scaling = rope_scaling
self._rope_scaling_validation()
self.attention_bias = attention_bias

super().__init__(
pad_token_id=pad_token_id,
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)

def _rope_scaling_validation(self):
"""
Validate the `rope_scaling` configuration.
"""
if self.rope_scaling is None:
return

if not isinstance(self.rope_scaling, dict):
raise ValueError(
"`rope_scaling` must be a dictionary, "
f"got {self.rope_scaling}"
)
rope_scaling_type = self.rope_scaling.get("type", None)
rope_scaling_factor = self.rope_scaling.get("factor", None)
if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic", "yarn", "dynamic-yarn"]:
raise ValueError(
f"`rope_scaling`'s name field must be one of ['linear', 'dynamic', 'yarn', 'dynamic-yarn'], got {rope_scaling_type}"
)
if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
raise ValueError(f"`rope_scaling`'s factor field must be an float > 1, got {rope_scaling_factor}")
if rope_scaling_type == "yarn" or rope_scaling_type == "dynamic-yarn":
original_max_position_embeddings = self.rope_scaling.get("original_max_position_embeddings", None)
if original_max_position_embeddings is None or not isinstance(original_max_position_embeddings, int):
raise ValueError(f"`rope_scaling.original_max_position_embeddings` must be set to an int when using yarn, and dynamic-yarn")