{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "0683c00e-bd2d-430f-a4f0-629457bfcc0c",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "fbcb8ef4a4754556991458d17072576a",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(HTML(value='<center> <img\\nsrc=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from huggingface_hub import notebook_login\n",
"notebook_login()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ad5d9515-4dee-4b05-abe1-0a60ebbddf64",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "5622f3d576d54360b81f6a35dbc4fc76",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from transformers import AutoTokenizer\n",
"import transformers\n",
"import torch\n",
"\n",
"model = \"TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged\"\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(model)\n",
"pipeline = transformers.pipeline(\n",
" \"text-generation\",\n",
" model=model,\n",
" torch_dtype=torch.float16,\n",
" device_map=\"auto\",\n",
")"
]
},
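{
"cell_type": "markdown",
"id": "prompt-template-helper-md",
"metadata": {},
"source": [
"The next cell builds the Llama-2 chat prompt by hand. As an optional convenience, the sketch below (a hypothetical helper, not part of the original run) wraps an arbitrary query in the same `[INST]<<SYS>> ... <</SYS>> ... [/INST]` template."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "prompt-template-helper-code",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical helper: wraps a query in the same [INST]<<SYS>> chat template used below.\n",
"def build_prompt(query, system=\"You are a helpful coding assistant that provides code based on the given query in context.\"):\n",
"    return f\"[INST]<<SYS>>\\n {system} \\n<</SYS>>\\n{query}[/INST]\"\n",
"\n",
"print(build_prompt(\"Write a python program to perform binary search in a given list.\"))"
]
},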
{
"cell_type": "code",
"execution_count": 9,
"id": "855e2272-1e42-46ad-9cab-9bed6a7fc6fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Result: [INST]<<SYS>>\n",
" You are a helpful coding assistant that provides code based on the given query in context. \n",
"<</SYS>>\n",
"Write a python program to perform binary search in a given list.[/INST] Of course! Here is a basic implementation of a binary search algorithm in Python:\n",
"```\n",
"def binary_search(list, target):\n",
" # Initialize the indices of the left and right halves of the list\n",
" low = 0\n",
" high = len(list) - 1\n",
"\n",
" # While the list has not been searched thoroughly\n",
" while low < high:\n",
" # Calculate the index of the middle element of the list\n",
" mid = (low + high) // 2\n",
"\n",
" # Check whether the target is equal to the middle element\n",
" if list[mid] == target:\n",
" # Return the index of the target\n",
" return mid\n",
"\n",
" # If the target is less than the middle element, move to the right\n",
" elif list[mid] < target:\n",
" high = mid - 1\n",
" # If the target is greater than the middle element, move to the left\n",
" else:\n",
" low = mid + 1\n",
"\n",
" # If we reach this point, the target was not found in the list\n",
" return -1\n",
"\n",
"# Example usage\n",
"list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n",
"target = 5\n",
"print(binary_search(list, target))\n",
"```\n",
"This code uses a simple loop to perform the binary search. It iterates through the list, comparing each element to the target, and adjusting the position of the current element based on the result of the comparison. At the end of the loop, it returns the index of the target element if it is found, or -1 if it is not found.\n",
"You can test the code with your own list by replacing the `list` variable with a list of your own, and calling the `binary_search` function with the target element you want to search for. For example, if your list is `['apple', 'banana', 'orange', 'banana', 'orange', 'apple']`, and you want to search for the element 'banana', you can call the function like this: `print(binary_search(list, 'banana')`.\n"
]
}
],
"source": [
"conversation_text = '''[INST]<<SYS>>\n",
" You are a helpful coding assistant that provides code based on the given query in context. \n",
"<</SYS>>\n",
"Write a python program to perform binary search in a given list.[/INST]'''\n",
"\n",
"sequences = pipeline(\n",
" conversation_text,\n",
" do_sample=True,\n",
" top_k=10,\n",
" num_return_sequences=1,\n",
" eos_token_id=tokenizer.eos_token_id,\n",
" max_length=2048,\n",
")\n",
"for seq in sequences:\n",
" print(f\"Result: {seq['generated_text']}\")"
]
},
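{
"cell_type": "markdown",
"id": "strip-prompt-echo-md",
"metadata": {},
"source": [
"By default the `text-generation` pipeline returns the prompt followed by the completion, which is why the printed result above starts with the `[INST]` block. A minimal post-processing sketch (added as a suggestion, not part of the original run) keeps only the model's answer by slicing off the prompt prefix."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "strip-prompt-echo-code",
"metadata": {},
"outputs": [],
"source": [
"# The pipeline echoes the prompt in generated_text; slice it off to keep only the answer.\n",
"completion = sequences[0][\"generated_text\"][len(conversation_text):]\n",
"print(completion)"
]
},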
{
"cell_type": "code",
"execution_count": null,
"id": "e85dd028-e20d-496d-a5ac-6835ecb3e212",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}