TheAlgorithms/Python | PR #9707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py

### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
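The technique described above (adding `assert` statements so mypy can narrow `Optional` types that the code's invariants already guarantee) can be sketched on a hypothetical circular linked list. The class and method names below are illustrative only, not the PR's actual code:

```python
from __future__ import annotations


class Node:
    def __init__(self, data: int) -> None:
        self.data = data
        self.next: Node | None = None


class CircularLinkedList:
    def __init__(self) -> None:
        self.head: Node | None = None  # mypy sees this as Optional everywhere

    def insert(self, data: int) -> None:
        node = Node(data)
        if self.head is None:
            node.next = node  # a single node points at itself
            self.head = node
        else:
            # Invariant: in a circular list, every node's `next` is non-None.
            # The assert tells mypy what the code already guarantees.
            assert self.head.next is not None
            node.next = self.head.next
            self.head.next = node

    def first(self) -> int:
        if self.head is None:
            raise IndexError("empty list")
        return self.head.data
```

Without the `assert`, mypy would flag `self.head.next` as a possible `None` access even though the circular-list invariant rules that out.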
Author: tianyizheng02 | Created: 2023-10-04T14:51:39Z | Merged: 2023-10-04T16:05:01Z | Previous commit: 3fd3497f15982a7286326b520b5e7b52767da1f3 | PR commit: dfdd78135df938d948ba3044aca628aca08886e7
| """Matrix Exponentiation"""
import timeit
"""
Matrix Exponentiation is a technique to solve linear recurrences in logarithmic time.
You can read more about it here:
https://zobayer.blogspot.com/2010/11/matrix-exponentiation.html
https://www.hackerearth.com/practice/notes/matrix-exponentiation-1/
"""
class Matrix:
def __init__(self, arg):
if isinstance(arg, list): # Initializes a matrix identical to the one provided.
self.t = arg
self.n = len(arg)
    else:  # Initializes a square matrix of the given size and sets values to zero.
self.n = arg
self.t = [[0 for _ in range(self.n)] for _ in range(self.n)]
def __mul__(self, b):
matrix = Matrix(self.n)
for i in range(self.n):
for j in range(self.n):
for k in range(self.n):
matrix.t[i][j] += self.t[i][k] * b.t[k][j]
return matrix
def modular_exponentiation(a, b):
    # Despite the name, no modulus is applied: this is plain binary
    # (square-and-multiply) matrix exponentiation.
matrix = Matrix([[1, 0], [0, 1]])
while b > 0:
if b & 1:
matrix *= a
a *= a
b >>= 1
return matrix
def fibonacci_with_matrix_exponentiation(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
matrix = Matrix([[1, 1], [1, 0]])
matrix = modular_exponentiation(matrix, n - 2)
return f2 * matrix.t[0][0] + f1 * matrix.t[0][1]
def simple_fibonacci(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
fn_1 = f1
fn_2 = f2
n -= 2
while n > 0:
fn_1, fn_2 = fn_1 + fn_2, fn_1
n -= 1
return fn_1
def matrix_exponentiation_time():
setup = """
from random import randint
from __main__ import fibonacci_with_matrix_exponentiation
"""
code = "fibonacci_with_matrix_exponentiation(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print("With matrix exponentiation the average execution time is ", exec_time / 100)
return exec_time
def simple_fibonacci_time():
setup = """
from random import randint
from __main__ import simple_fibonacci
"""
code = "simple_fibonacci(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print(
"Without matrix exponentiation the average execution time is ", exec_time / 100
)
return exec_time
def main():
matrix_exponentiation_time()
simple_fibonacci_time()
if __name__ == "__main__":
main()
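As a sanity check on the square-and-multiply idea used above, here is a minimal standalone sketch with 2x2 matrices as plain lists. The helper names are illustrative, not part of the original file; it relies on the identity that F(n) is the top-left entry of [[1, 1], [1, 0]]^(n-1):

```python
def matrix_mult_2x2(x, y):
    # multiply two 2x2 matrices represented as nested lists
    return [
        [x[0][0] * y[0][0] + x[0][1] * y[1][0], x[0][0] * y[0][1] + x[0][1] * y[1][1]],
        [x[1][0] * y[0][0] + x[1][1] * y[1][0], x[1][0] * y[0][1] + x[1][1] * y[1][1]],
    ]


def matrix_power(m, exp):
    # square-and-multiply: start from the identity, square the base each step
    result = [[1, 0], [0, 1]]
    while exp > 0:
        if exp & 1:
            result = matrix_mult_2x2(result, m)
        m = matrix_mult_2x2(m, m)
        exp >>= 1
    return result


def fib(n):
    # F(n) is the top-left entry of [[1, 1], [1, 0]]^(n-1)
    if n == 0:
        return 0
    return matrix_power([[1, 1], [1, 0]], n - 1)[0][0]
```

Because the exponent halves on every iteration, `fib(n)` needs only O(log n) matrix multiplications, which is why the timing comparison above favors it for large n.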
| """Matrix Exponentiation"""
import timeit
"""
Matrix Exponentiation is a technique to solve linear recurrences in logarithmic time.
You read more about it here:
https://zobayer.blogspot.com/2010/11/matrix-exponentiation.html
https://www.hackerearth.com/practice/notes/matrix-exponentiation-1/
"""
class Matrix:
def __init__(self, arg):
if isinstance(arg, list): # Initializes a matrix identical to the one provided.
self.t = arg
self.n = len(arg)
else: # Initializes a square matrix of the given size and set values to zero.
self.n = arg
self.t = [[0 for _ in range(self.n)] for _ in range(self.n)]
def __mul__(self, b):
matrix = Matrix(self.n)
for i in range(self.n):
for j in range(self.n):
for k in range(self.n):
matrix.t[i][j] += self.t[i][k] * b.t[k][j]
return matrix
def modular_exponentiation(a, b):
matrix = Matrix([[1, 0], [0, 1]])
while b > 0:
if b & 1:
matrix *= a
a *= a
b >>= 1
return matrix
def fibonacci_with_matrix_exponentiation(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
matrix = Matrix([[1, 1], [1, 0]])
matrix = modular_exponentiation(matrix, n - 2)
return f2 * matrix.t[0][0] + f1 * matrix.t[0][1]
def simple_fibonacci(n, f1, f2):
# Trivial Cases
if n == 1:
return f1
elif n == 2:
return f2
fn_1 = f1
fn_2 = f2
n -= 2
while n > 0:
fn_1, fn_2 = fn_1 + fn_2, fn_1
n -= 1
return fn_1
def matrix_exponentiation_time():
setup = """
from random import randint
from __main__ import fibonacci_with_matrix_exponentiation
"""
code = "fibonacci_with_matrix_exponentiation(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print("With matrix exponentiation the average execution time is ", exec_time / 100)
return exec_time
def simple_fibonacci_time():
setup = """
from random import randint
from __main__ import simple_fibonacci
"""
code = "simple_fibonacci(randint(1,70000), 1, 1)"
exec_time = timeit.timeit(setup=setup, stmt=code, number=100)
print(
"Without matrix exponentiation the average execution time is ", exec_time / 100
)
return exec_time
def main():
matrix_exponentiation_time()
simple_fibonacci_time()
if __name__ == "__main__":
main()
| -1 |
# Conversion
Conversion programs convert data from one type, numerical base, or unit into another, e.g. binary to decimal, integer to string, or feet to meters.
* <https://en.wikipedia.org/wiki/Data_conversion>
* <https://en.wikipedia.org/wiki/Transcoding>
| """
Project Euler Problem 38: https://projecteuler.net/problem=38
Take the number 192 and multiply it by each of 1, 2, and 3:
192 × 1 = 192
192 × 2 = 384
192 × 3 = 576
By concatenating each product we get the 1 to 9 pandigital, 192384576. We will call
192384576 the concatenated product of 192 and (1,2,3).
The same can be achieved by starting with 9 and multiplying by 1, 2, 3, 4, and 5,
giving the pandigital, 918273645, which is the concatenated product of 9 and
(1,2,3,4,5).
What is the largest 1 to 9 pandigital 9-digit number that can be formed as the
concatenated product of an integer with (1,2, ... , n) where n > 1?
Solution:
Since n > 1, the largest candidate for the solution will be a concatenation of
a 4-digit number and its double, a 5-digit number.
Let a be the 4-digit number.
a has 4 digits => 1000 <= a < 10000
2a has 5 digits => 10000 <= 2a < 100000
=> 5000 <= a < 10000
The concatenation of a with 2a = a * 10^5 + 2a
so our candidate for a given a is 100002 * a.
We iterate through the search space 5000 <= a < 10000 in reverse order,
calculating the candidates for each a and checking if they are 1-9 pandigital.
In case there are no 4-digit numbers that satisfy this property, we check
the 3-digit numbers with a similar formula (the example a=192 gives a lower
bound on the length of a):
a has 3 digits, etc...
=> 100 <= a < 334, candidate = a * 10^6 + 2a * 10^3 + 3a
= 1002003 * a
"""
from __future__ import annotations
def is_9_pandigital(n: int) -> bool:
"""
Checks whether n is a 9-digit 1 to 9 pandigital number.
>>> is_9_pandigital(12345)
False
>>> is_9_pandigital(156284973)
True
>>> is_9_pandigital(1562849733)
False
"""
s = str(n)
return len(s) == 9 and set(s) == set("123456789")
def solution() -> int | None:
"""
Return the largest 1 to 9 pandigital 9-digit number that can be formed as the
concatenated product of an integer with (1,2,...,n) where n > 1.
"""
for base_num in range(9999, 4999, -1):
candidate = 100002 * base_num
if is_9_pandigital(candidate):
return candidate
for base_num in range(333, 99, -1):
candidate = 1002003 * base_num
if is_9_pandigital(candidate):
return candidate
return None
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 38: https://projecteuler.net/problem=38
Take the number 192 and multiply it by each of 1, 2, and 3:
192 Γ 1 = 192
192 Γ 2 = 384
192 Γ 3 = 576
By concatenating each product we get the 1 to 9 pandigital, 192384576. We will call
192384576 the concatenated product of 192 and (1,2,3)
The same can be achieved by starting with 9 and multiplying by 1, 2, 3, 4, and 5,
giving the pandigital, 918273645, which is the concatenated product of 9 and
(1,2,3,4,5).
What is the largest 1 to 9 pandigital 9-digit number that can be formed as the
concatenated product of an integer with (1,2, ... , n) where n > 1?
Solution:
Since n>1, the largest candidate for the solution will be a concactenation of
a 4-digit number and its double, a 5-digit number.
Let a be the 4-digit number.
a has 4 digits => 1000 <= a < 10000
2a has 5 digits => 10000 <= 2a < 100000
=> 5000 <= a < 10000
The concatenation of a with 2a = a * 10^5 + 2a
so our candidate for a given a is 100002 * a.
We iterate through the search space 5000 <= a < 10000 in reverse order,
calculating the candidates for each a and checking if they are 1-9 pandigital.
In case there are no 4-digit numbers that satisfy this property, we check
the 3-digit numbers with a similar formula (the example a=192 gives a lower
bound on the length of a):
a has 3 digits, etc...
=> 100 <= a < 334, candidate = a * 10^6 + 2a * 10^3 + 3a
= 1002003 * a
"""
from __future__ import annotations
def is_9_pandigital(n: int) -> bool:
"""
Checks whether n is a 9-digit 1 to 9 pandigital number.
>>> is_9_pandigital(12345)
False
>>> is_9_pandigital(156284973)
True
>>> is_9_pandigital(1562849733)
False
"""
s = str(n)
return len(s) == 9 and set(s) == set("123456789")
def solution() -> int | None:
"""
Return the largest 1 to 9 pandigital 9-digital number that can be formed as the
concatenated product of an integer with (1,2,...,n) where n > 1.
"""
for base_num in range(9999, 4999, -1):
candidate = 100002 * base_num
if is_9_pandigital(candidate):
return candidate
for base_num in range(333, 99, -1):
candidate = 1002003 * base_num
if is_9_pandigital(candidate):
return candidate
return None
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
| """Convert a Decimal Number to a Binary Number."""
def decimal_to_binary_iterative(num: int) -> str:
"""
Convert an Integer Decimal Number to a Binary Number as str.
>>> decimal_to_binary_iterative(0)
'0b0'
>>> decimal_to_binary_iterative(2)
'0b10'
>>> decimal_to_binary_iterative(7)
'0b111'
>>> decimal_to_binary_iterative(35)
'0b100011'
>>> # negatives work too
>>> decimal_to_binary_iterative(-2)
'-0b10'
>>> # other floats will error
>>> decimal_to_binary_iterative(16.16) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> # strings will error as well
>>> decimal_to_binary_iterative('0xfffff') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'str' object cannot be interpreted as an integer
"""
if isinstance(num, float):
raise TypeError("'float' object cannot be interpreted as an integer")
if isinstance(num, str):
raise TypeError("'str' object cannot be interpreted as an integer")
if num == 0:
return "0b0"
negative = False
if num < 0:
negative = True
num = -num
binary: list[int] = []
while num > 0:
binary.insert(0, num % 2)
num >>= 1
if negative:
return "-0b" + "".join(str(e) for e in binary)
return "0b" + "".join(str(e) for e in binary)
def decimal_to_binary_recursive_helper(decimal: int | str) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> decimal_to_binary_recursive_helper(1000)
'1111101000'
>>> decimal_to_binary_recursive_helper("72")
'1001000'
>>> decimal_to_binary_recursive_helper("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return decimal_to_binary_recursive_helper(div) + str(mod)
def decimal_to_binary_recursive(number: str | int | float) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> decimal_to_binary_recursive(0)
'0b0'
>>> decimal_to_binary_recursive(40)
'0b101000'
>>> decimal_to_binary_recursive(-40)
'-0b101000'
>>> decimal_to_binary_recursive(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> decimal_to_binary_recursive("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{decimal_to_binary_recursive_helper(int(number))}"
if __name__ == "__main__":
import doctest
doctest.testmod()
print(decimal_to_binary_recursive(input("Input a decimal number: ")))
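As a quick cross-check, the iterative conversion above should agree with Python's built-in `bin()` for every integer input. A minimal standalone re-implementation of the same approach, compared against the builtin:

```python
def to_binary(num: int) -> str:
    # minimal re-implementation of the iterative approach, for comparison
    if num == 0:
        return "0b0"
    sign, num = ("-", -num) if num < 0 else ("", num)
    bits = []
    while num > 0:
        bits.append(str(num % 2))  # collect remainders, least significant first
        num >>= 1
    return sign + "0b" + "".join(reversed(bits))


# agrees with the builtin on positives, negatives, and zero
for value in [0, 1, 2, 7, 35, 40, 1000, -2, -40]:
    assert to_binary(value) == bin(value)
```

Appending remainders and reversing once at the end avoids the repeated `list.insert(0, ...)` of the version above, which is O(n) per insertion.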
| """Convert a Decimal Number to a Binary Number."""
def decimal_to_binary_iterative(num: int) -> str:
"""
Convert an Integer Decimal Number to a Binary Number as str.
>>> decimal_to_binary_iterative(0)
'0b0'
>>> decimal_to_binary_iterative(2)
'0b10'
>>> decimal_to_binary_iterative(7)
'0b111'
>>> decimal_to_binary_iterative(35)
'0b100011'
>>> # negatives work too
>>> decimal_to_binary_iterative(-2)
'-0b10'
>>> # other floats will error
>>> decimal_to_binary_iterative(16.16) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> # strings will error as well
>>> decimal_to_binary_iterative('0xfffff') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'str' object cannot be interpreted as an integer
"""
if isinstance(num, float):
raise TypeError("'float' object cannot be interpreted as an integer")
if isinstance(num, str):
raise TypeError("'str' object cannot be interpreted as an integer")
if num == 0:
return "0b0"
negative = False
if num < 0:
negative = True
num = -num
binary: list[int] = []
while num > 0:
binary.insert(0, num % 2)
num >>= 1
if negative:
return "-0b" + "".join(str(e) for e in binary)
return "0b" + "".join(str(e) for e in binary)
def decimal_to_binary_recursive_helper(decimal: int | str) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> decimal_to_binary_recursive_helper(1000)
'1111101000'
>>> decimal_to_binary_recursive_helper("72")
'1001000'
>>> decimal_to_binary_recursive_helper("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return decimal_to_binary_recursive_helper(div) + str(mod)
def decimal_to_binary_recursive(number: int | float | str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> decimal_to_binary_recursive(0)
'0b0'
>>> decimal_to_binary_recursive(40)
'0b101000'
>>> decimal_to_binary_recursive(-40)
'-0b101000'
>>> decimal_to_binary_recursive(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> decimal_to_binary_recursive("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{decimal_to_binary_recursive_helper(int(number))}"
if __name__ == "__main__":
import doctest
doctest.testmod()
print(decimal_to_binary_recursive(input("Input a decimal number: ")))
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
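The `assert`-based narrowing described above can be sketched as follows. This is a hypothetical, minimal example (not the actual PR code — the real fixes live in `circular_linked_list.py` and `swap_nodes.py`): an `assert` convinces mypy that an optional node is not `None` before its attributes are accessed.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Node:
    data: int
    next_node: Node | None = None


def second_value(head: Node) -> int:
    # Without narrowing, mypy reports:
    #   Item "None" of "Node | None" has no attribute "data"
    assert head.next_node is not None
    return head.next_node.data  # mypy now treats next_node as Node


print(second_value(Node(1, Node(2))))  # 2
```

The same effect can be achieved with an explicit `if ... is None: raise` guard; `assert` is simply the most concise way to state an implicitly understood invariant.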
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Name scores
Problem 22
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file
containing over five-thousand first names, begin by sorting it into
alphabetical order. Then working out the alphabetical value for each name,
multiply this value by its alphabetical position in the list to obtain a name
score.
For example, when the list is sorted into alphabetical order, COLIN, which is
worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would
obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
"""
import os
def solution():
"""Returns the total of all the name scores in the file.
>>> solution()
871198282
"""
total_sum = 0
temp_sum = 0
with open(os.path.dirname(__file__) + "/p022_names.txt") as file:
name = str(file.readlines()[0])
name = name.replace('"', "").split(",")
name.sort()
for i in range(len(name)):
for j in name[i]:
temp_sum += ord(j) - ord("A") + 1
total_sum += (i + 1) * temp_sum
temp_sum = 0
return total_sum
if __name__ == "__main__":
print(solution())
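The worked example in the problem statement can be checked with a short, self-contained snippet (illustrative only — it does not read `p022_names.txt`), using the same letter-scoring rule as `solution()` above:

```python
def name_score(name: str, position: int) -> int:
    # Alphabetical value (A=1, B=2, ...) summed over the letters,
    # multiplied by the 1-based position in the sorted list.
    return position * sum(ord(letter) - ord("A") + 1 for letter in name)


print(name_score("COLIN", 938))  # 49714  (53 * 938)
```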
| """
Name scores
Problem 22
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file
containing over five-thousand first names, begin by sorting it into
alphabetical order. Then working out the alphabetical value for each name,
multiply this value by its alphabetical position in the list to obtain a name
score.
For example, when the list is sorted into alphabetical order, COLIN, which is
worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would
obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
"""
import os
def solution():
"""Returns the total of all the name scores in the file.
>>> solution()
871198282
"""
total_sum = 0
temp_sum = 0
with open(os.path.dirname(__file__) + "/p022_names.txt") as file:
name = str(file.readlines()[0])
name = name.replace('"', "").split(",")
name.sort()
for i in range(len(name)):
for j in name[i]:
temp_sum += ord(j) - ord("A") + 1
total_sum += (i + 1) * temp_sum
temp_sum = 0
return total_sum
if __name__ == "__main__":
print(solution())
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Implementation of iterative merge sort in Python
Author: Aman Gupta
For doctests run following command:
python3 -m doctest -v iterative_merge_sort.py
For manual testing run:
python3 iterative_merge_sort.py
"""
from __future__ import annotations
def merge(input_list: list, low: int, mid: int, high: int) -> list:
"""
Sort the left half and the right half individually,
then merge them into the result.
"""
result = []
left, right = input_list[low:mid], input_list[mid : high + 1]
while left and right:
result.append((left if left[0] <= right[0] else right).pop(0))
input_list[low : high + 1] = result + left + right
return input_list
# iteration over the unsorted list
def iter_merge_sort(input_list: list) -> list:
"""
Return a sorted copy of the input list
>>> iter_merge_sort([5, 9, 8, 7, 1, 2, 7])
[1, 2, 5, 7, 7, 8, 9]
>>> iter_merge_sort([1])
[1]
>>> iter_merge_sort([2, 1])
[1, 2]
>>> iter_merge_sort([2, 1, 3])
[1, 2, 3]
>>> iter_merge_sort([4, 3, 2, 1])
[1, 2, 3, 4]
>>> iter_merge_sort([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort([0.3, 0.2, 0.1])
[0.1, 0.2, 0.3]
>>> iter_merge_sort(['dep', 'dang', 'trai'])
['dang', 'dep', 'trai']
>>> iter_merge_sort([6])
[6]
>>> iter_merge_sort([])
[]
>>> iter_merge_sort([-2, -9, -1, -4])
[-9, -4, -2, -1]
>>> iter_merge_sort([1.1, 1, 0.0, -1, -1.1])
[-1.1, -1, 0.0, 1, 1.1]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort('cba')
['a', 'b', 'c']
"""
if len(input_list) <= 1:
return input_list
input_list = list(input_list)
# iteration for two-way merging
p = 2
while p <= len(input_list):
# getting low, high and middle value for merge-sort of single list
for i in range(0, len(input_list), p):
low = i
high = i + p - 1
mid = (low + high + 1) // 2
input_list = merge(input_list, low, mid, high)
# final merge of last two parts
if p * 2 >= len(input_list):
mid = i
input_list = merge(input_list, 0, mid, len(input_list) - 1)
break
p *= 2
return input_list
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
if user_input == "":
unsorted = []
else:
unsorted = [int(item.strip()) for item in user_input.split(",")]
print(iter_merge_sort(unsorted))
| """
Implementation of iterative merge sort in Python
Author: Aman Gupta
For doctests run following command:
python3 -m doctest -v iterative_merge_sort.py
For manual testing run:
python3 iterative_merge_sort.py
"""
from __future__ import annotations
def merge(input_list: list, low: int, mid: int, high: int) -> list:
"""
Sort the left half and the right half individually,
then merge them into the result.
"""
result = []
left, right = input_list[low:mid], input_list[mid : high + 1]
while left and right:
result.append((left if left[0] <= right[0] else right).pop(0))
input_list[low : high + 1] = result + left + right
return input_list
# iteration over the unsorted list
def iter_merge_sort(input_list: list) -> list:
"""
Return a sorted copy of the input list
>>> iter_merge_sort([5, 9, 8, 7, 1, 2, 7])
[1, 2, 5, 7, 7, 8, 9]
>>> iter_merge_sort([1])
[1]
>>> iter_merge_sort([2, 1])
[1, 2]
>>> iter_merge_sort([2, 1, 3])
[1, 2, 3]
>>> iter_merge_sort([4, 3, 2, 1])
[1, 2, 3, 4]
>>> iter_merge_sort([5, 4, 3, 2, 1])
[1, 2, 3, 4, 5]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort([0.3, 0.2, 0.1])
[0.1, 0.2, 0.3]
>>> iter_merge_sort(['dep', 'dang', 'trai'])
['dang', 'dep', 'trai']
>>> iter_merge_sort([6])
[6]
>>> iter_merge_sort([])
[]
>>> iter_merge_sort([-2, -9, -1, -4])
[-9, -4, -2, -1]
>>> iter_merge_sort([1.1, 1, 0.0, -1, -1.1])
[-1.1, -1, 0.0, 1, 1.1]
>>> iter_merge_sort(['c', 'b', 'a'])
['a', 'b', 'c']
>>> iter_merge_sort('cba')
['a', 'b', 'c']
"""
if len(input_list) <= 1:
return input_list
input_list = list(input_list)
# iteration for two-way merging
p = 2
while p <= len(input_list):
# getting low, high and middle value for merge-sort of single list
for i in range(0, len(input_list), p):
low = i
high = i + p - 1
mid = (low + high + 1) // 2
input_list = merge(input_list, low, mid, high)
# final merge of last two parts
if p * 2 >= len(input_list):
mid = i
input_list = merge(input_list, 0, mid, len(input_list) - 1)
break
p *= 2
return input_list
if __name__ == "__main__":
user_input = input("Enter numbers separated by a comma:\n").strip()
if user_input == "":
unsorted = []
else:
unsorted = [int(item.strip()) for item in user_input.split(",")]
print(iter_merge_sort(unsorted))
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Approximates the area under the curve using the trapezoidal rule
"""
from __future__ import annotations
from collections.abc import Callable
def trapezoidal_area(
fnc: Callable[[float], float],
x_start: float,
x_end: float,
steps: int = 100,
) -> float:
"""
Treats curve as a collection of linear lines and sums the area of the
trapezium shape they form
:param fnc: a function which defines a curve
:param x_start: left end point to indicate the start of line segment
:param x_end: right end point to indicate end of line segment
:param steps: an accuracy gauge; more steps increases the accuracy
:return: a float representing the area under the curve
>>> def f(x):
... return 5
>>> '%.3f' % trapezoidal_area(f, 12.0, 14.0, 1000)
'10.000'
>>> def f(x):
... return 9*x**2
>>> '%.4f' % trapezoidal_area(f, -4.0, 0, 10000)
'192.0000'
>>> '%.4f' % trapezoidal_area(f, -4.0, 4.0, 10000)
'384.0000'
"""
x1 = x_start
fx1 = fnc(x_start)
area = 0.0
for _ in range(steps):
# Approximate each small segment of the curve as linear and solve
# for the trapezoidal area
x2 = (x_end - x_start) / steps + x1
fx2 = fnc(x2)
area += abs(fx2 + fx1) * (x2 - x1) / 2
# Increment step
x1 = x2
fx1 = fx2
return area
if __name__ == "__main__":
def f(x):
return x**3
print("f(x) = x^3")
print("The area between the curve, x = -10, x = 10 and the x axis is:")
i = 10
while i <= 100000:
area = trapezoidal_area(f, -5, 5, i)
print(f"with {i} steps: {area}")
i *= 10
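A quick consistency check of the docstring values (a hedged, standalone sketch that re-implements the same rule rather than importing the file): the exact integral of 9*x**2 over [-4, 0] is 3*x**3 evaluated from -4 to 0, i.e. 192, and the trapezoidal estimate converges to it as the step count grows.

```python
def trapezoidal_area(fnc, x_start, x_end, steps=100):
    # Same rule as above: sum trapezoid areas over `steps` equal sub-intervals.
    width = (x_end - x_start) / steps
    return sum(
        (fnc(x_start + i * width) + fnc(x_start + (i + 1) * width)) * width / 2
        for i in range(steps)
    )


# Exact integral of 9*x**2 over [-4, 0] is 192; 10_000 steps match to 4 places.
print(f"{trapezoidal_area(lambda x: 9 * x**2, -4.0, 0.0, 10_000):.4f}")  # 192.0000
```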
| """
Approximates the area under the curve using the trapezoidal rule
"""
from __future__ import annotations
from collections.abc import Callable
def trapezoidal_area(
fnc: Callable[[float], float],
x_start: float,
x_end: float,
steps: int = 100,
) -> float:
"""
Treats curve as a collection of linear lines and sums the area of the
trapezium shape they form
:param fnc: a function which defines a curve
:param x_start: left end point to indicate the start of line segment
:param x_end: right end point to indicate end of line segment
:param steps: an accuracy gauge; more steps increases the accuracy
:return: a float representing the area under the curve
>>> def f(x):
... return 5
>>> '%.3f' % trapezoidal_area(f, 12.0, 14.0, 1000)
'10.000'
>>> def f(x):
... return 9*x**2
>>> '%.4f' % trapezoidal_area(f, -4.0, 0, 10000)
'192.0000'
>>> '%.4f' % trapezoidal_area(f, -4.0, 4.0, 10000)
'384.0000'
"""
x1 = x_start
fx1 = fnc(x_start)
area = 0.0
for _ in range(steps):
# Approximate each small segment of the curve as linear and solve
# for the trapezoidal area
x2 = (x_end - x_start) / steps + x1
fx2 = fnc(x2)
area += abs(fx2 + fx1) * (x2 - x1) / 2
# Increment step
x1 = x2
fx1 = fx2
return area
if __name__ == "__main__":
def f(x):
return x**3
print("f(x) = x^3")
print("The area between the curve, x = -10, x = 10 and the x axis is:")
i = 10
while i <= 100000:
area = trapezoidal_area(f, -5, 5, i)
print(f"with {i} steps: {area}")
i *= 10
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #!/usr/bin/python
# Logistic Regression from scratch
# In[62]:
# In[63]:
# importing all the required libraries
"""
Implementing logistic regression for classification problem
Helpful resources:
Coursera ML course
https://medium.com/@martinpella/logistic-regression-from-scratch-in-python-124c5636b8ac
"""
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
# get_ipython().run_line_magic('matplotlib', 'inline')
# In[67]:
# sigmoid function or logistic function is used as a hypothesis function in
# classification problems
def sigmoid_function(z):
return 1 / (1 + np.exp(-z))
def cost_function(h, y):
return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
def log_likelihood(x, y, weights):
scores = np.dot(x, weights)
return np.sum(y * scores - np.log(1 + np.exp(scores)))
# here alpha is the learning rate, x is the feature matrix and y is the target vector
def logistic_reg(alpha, x, y, max_iterations=70000):
theta = np.zeros(x.shape[1])
for iterations in range(max_iterations):
z = np.dot(x, theta)
h = sigmoid_function(z)
gradient = np.dot(x.T, h - y) / y.size
theta = theta - alpha * gradient # updating the weights
z = np.dot(x, theta)
h = sigmoid_function(z)
j = cost_function(h, y)
if iterations % 100 == 0:
print(f"loss: {j} \t") # printing the loss after every 100 iterations
return theta
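Not part of the original file, but a useful check that the analytic gradient used in the loop above (`np.dot(x.T, h - y) / y.size`) really is the derivative of the cross-entropy cost — comparing it against central finite differences on random data:

```python
import numpy as np


def sigmoid(z):
    return 1 / (1 + np.exp(-z))


def cost(theta, x, y):
    # same cross-entropy cost as cost_function above, viewed as a function of theta
    h = sigmoid(x @ theta)
    return float((-y * np.log(h) - (1 - y) * np.log(1 - h)).mean())


rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))
y = (rng.random(50) > 0.5).astype(float)
theta = rng.normal(size=3)

# analytic gradient, exactly as used inside the training loop
analytic = x.T @ (sigmoid(x @ theta) - y) / y.size

# central finite differences of the cost
eps = 1e-6
numeric = np.zeros_like(theta)
for i in range(3):
    step = np.zeros(3)
    step[i] = eps
    numeric[i] = (cost(theta + step, x, y) - cost(theta - step, x, y)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # tiny, e.g. below 1e-6
```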
if __name__ == "__main__":
iris = datasets.load_iris()
x = iris.data[:, :2]
y = (iris.target != 0) * 1
alpha = 0.1
theta = logistic_reg(alpha, x, y, max_iterations=70000)
print("theta: ", theta) # printing the theta i.e our weights vector
def predict_prob(x):
return sigmoid_function(
np.dot(x, theta)
) # predicting the value of probability from the logistic regression algorithm
plt.figure(figsize=(10, 6))
plt.scatter(x[y == 0][:, 0], x[y == 0][:, 1], color="b", label="0")
plt.scatter(x[y == 1][:, 0], x[y == 1][:, 1], color="r", label="1")
(x1_min, x1_max) = (x[:, 0].min(), x[:, 0].max())
(x2_min, x2_max) = (x[:, 1].min(), x[:, 1].max())
(xx1, xx2) = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
grid = np.c_[xx1.ravel(), xx2.ravel()]
probs = predict_prob(grid).reshape(xx1.shape)
plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors="black")
plt.legend()
plt.show()
| #!/usr/bin/python
# Logistic Regression from scratch
# importing all the required libraries
"""
Implementing logistic regression for classification problem
Helpful resources:
Coursera ML course
https://medium.com/@martinpella/logistic-regression-from-scratch-in-python-124c5636b8ac
"""
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets
# get_ipython().run_line_magic('matplotlib', 'inline')
# sigmoid function or logistic function is used as a hypothesis function in
# classification problems
def sigmoid_function(z):
return 1 / (1 + np.exp(-z))
def cost_function(h, y):
return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()
def log_likelihood(x, y, weights):
scores = np.dot(x, weights)
return np.sum(y * scores - np.log(1 + np.exp(scores)))
# here alpha is the learning rate, x is the feature matrix, and y is the target vector
def logistic_reg(alpha, x, y, max_iterations=70000):
theta = np.zeros(x.shape[1])
for iterations in range(max_iterations):
z = np.dot(x, theta)
h = sigmoid_function(z)
gradient = np.dot(x.T, h - y) / y.size
theta = theta - alpha * gradient # updating the weights
z = np.dot(x, theta)
h = sigmoid_function(z)
j = cost_function(h, y)
if iterations % 100 == 0:
print(f"loss: {j} \t") # printing the loss after every 100 iterations
return theta
if __name__ == "__main__":
iris = datasets.load_iris()
x = iris.data[:, :2]
y = (iris.target != 0) * 1
alpha = 0.1
theta = logistic_reg(alpha, x, y, max_iterations=70000)
print("theta: ", theta) # printing the theta i.e our weights vector
def predict_prob(x):
return sigmoid_function(
np.dot(x, theta)
) # predicting the value of probability from the logistic regression algorithm
plt.figure(figsize=(10, 6))
plt.scatter(x[y == 0][:, 0], x[y == 0][:, 1], color="b", label="0")
plt.scatter(x[y == 1][:, 0], x[y == 1][:, 1], color="r", label="1")
(x1_min, x1_max) = (x[:, 0].min(), x[:, 0].max())
(x2_min, x2_max) = (x[:, 1].min(), x[:, 1].max())
(xx1, xx2) = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
grid = np.c_[xx1.ravel(), xx2.ravel()]
probs = predict_prob(grid).reshape(xx1.shape)
plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors="black")
plt.legend()
plt.show()
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
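The description above mentions adding `assert` statements and type hints to convince mypy of implicitly understood guarantees; a minimal sketch of that pattern (this `Node` class is illustrative only, not the repo's actual linked-list code):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Node:
    data: int
    next: Node | None = None


def second_value(head: Node) -> int:
    # head.next is typed "Node | None"; without narrowing, mypy flags the
    # attribute access below with: Item "None" has no attribute "data".
    assert head.next is not None, "list has fewer than two nodes"
    return head.next.data  # after the assert, mypy narrows the type to "Node"


print(second_value(Node(1, Node(2))))  # 2
```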
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
import typing
from collections.abc import Iterable
import numpy as np
Vector = typing.Union[Iterable[float], Iterable[int], np.ndarray] # noqa: UP007
VectorOut = typing.Union[np.float64, int, float] # noqa: UP007
def euclidean_distance(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance(np.array([0, 0, 0]), np.array([2, 2, 2]))
3.4641016151377544
>>> euclidean_distance(np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]))
8.0
>>> euclidean_distance([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return np.sqrt(np.sum((np.asarray(vector_1) - np.asarray(vector_2)) ** 2))
def euclidean_distance_no_np(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors without numpy.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance_no_np((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance_no_np([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return sum((v1 - v2) ** 2 for v1, v2 in zip(vector_1, vector_2)) ** (1 / 2)
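For reference, since Python 3.8 the standard library's `math.dist` computes the same quantity; a quick cross-check (functions redefined here so the snippet runs standalone):

```python
import math

import numpy as np


def euclidean_np(v1, v2):
    # numpy version: elementwise difference, square, sum, square root
    return np.sqrt(np.sum((np.asarray(v1) - np.asarray(v2)) ** 2))


def euclidean_pure(v1, v2):
    # pure-Python version using a generator expression
    return sum((a - b) ** 2 for a, b in zip(v1, v2)) ** 0.5


p, q = [1, 2, 3, 4], [5, 6, 7, 8]
print(euclidean_pure(p, q))       # 8.0
print(float(euclidean_np(p, q)))  # 8.0
print(math.dist(p, q))            # 8.0
```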
if __name__ == "__main__":
def benchmark() -> None:
"""
Benchmarks
"""
from timeit import timeit
print("Without Numpy")
print(
timeit(
"euclidean_distance_no_np([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
print("With Numpy")
print(
timeit(
"euclidean_distance([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
benchmark()
| from __future__ import annotations
import typing
from collections.abc import Iterable
import numpy as np
Vector = typing.Union[Iterable[float], Iterable[int], np.ndarray] # noqa: UP007
VectorOut = typing.Union[np.float64, int, float] # noqa: UP007
def euclidean_distance(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance(np.array([0, 0, 0]), np.array([2, 2, 2]))
3.4641016151377544
>>> euclidean_distance(np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]))
8.0
>>> euclidean_distance([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return np.sqrt(np.sum((np.asarray(vector_1) - np.asarray(vector_2)) ** 2))
def euclidean_distance_no_np(vector_1: Vector, vector_2: Vector) -> VectorOut:
"""
Calculate the distance between the two endpoints of two vectors without numpy.
A vector is defined as a list, tuple, or numpy 1D array.
>>> euclidean_distance_no_np((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance_no_np([1, 2, 3, 4], [5, 6, 7, 8])
8.0
"""
return sum((v1 - v2) ** 2 for v1, v2 in zip(vector_1, vector_2)) ** (1 / 2)
if __name__ == "__main__":
def benchmark() -> None:
"""
Benchmarks
"""
from timeit import timeit
print("Without Numpy")
print(
timeit(
"euclidean_distance_no_np([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
print("With Numpy")
print(
timeit(
"euclidean_distance([1, 2, 3], [4, 5, 6])",
number=10000,
globals=globals(),
)
)
benchmark()
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
class Graph:
def __init__(self, vertices: int) -> None:
"""
>>> graph = Graph(2)
>>> graph.vertices
2
>>> len(graph.graph)
2
>>> len(graph.graph[0])
2
"""
self.vertices = vertices
self.graph = [[0] * vertices for _ in range(vertices)]
def print_solution(self, distances_from_source: list[int]) -> None:
"""
>>> Graph(0).print_solution([]) # doctest: +NORMALIZE_WHITESPACE
Vertex Distance from Source
"""
print("Vertex \t Distance from Source")
for vertex in range(self.vertices):
print(vertex, "\t\t", distances_from_source[vertex])
def minimum_distance(
self, distances_from_source: list[int], visited: list[bool]
) -> int:
"""
A utility function to find the vertex with minimum distance value, from the set
of vertices not yet included in shortest path tree.
>>> Graph(3).minimum_distance([1, 2, 3], [False, False, True])
0
"""
# Initialize minimum distance for next node
minimum = 1e7
min_index = 0
# Search for the nearest vertex not yet in the shortest path tree
for vertex in range(self.vertices):
if distances_from_source[vertex] < minimum and visited[vertex] is False:
minimum = distances_from_source[vertex]
min_index = vertex
return min_index
def dijkstra(self, source: int) -> None:
"""
Function that implements Dijkstra's single source shortest path algorithm for a
graph represented using adjacency matrix representation.
>>> Graph(4).dijkstra(1) # doctest: +NORMALIZE_WHITESPACE
Vertex Distance from Source
0 10000000
1 0
2 10000000
3 10000000
"""
distances = [int(1e7)] * self.vertices # distances from the source
distances[source] = 0
visited = [False] * self.vertices
for _ in range(self.vertices):
u = self.minimum_distance(distances, visited)
visited[u] = True
# Update dist value of the adjacent vertices
# of the picked vertex only if the current
# distance is greater than new distance and
# the vertex is not in the shortest path tree
for v in range(self.vertices):
if (
self.graph[u][v] > 0
and visited[v] is False
and distances[v] > distances[u] + self.graph[u][v]
):
distances[v] = distances[u] + self.graph[u][v]
self.print_solution(distances)
if __name__ == "__main__":
graph = Graph(9)
graph.graph = [
[0, 4, 0, 0, 0, 0, 0, 8, 0],
[4, 0, 8, 0, 0, 0, 0, 11, 0],
[0, 8, 0, 7, 0, 4, 0, 0, 2],
[0, 0, 7, 0, 9, 14, 0, 0, 0],
[0, 0, 0, 9, 0, 10, 0, 0, 0],
[0, 0, 4, 14, 10, 0, 2, 0, 0],
[0, 0, 0, 0, 0, 2, 0, 1, 6],
[8, 11, 0, 0, 0, 0, 1, 0, 7],
[0, 0, 2, 0, 0, 0, 6, 7, 0],
]
graph.dijkstra(0)
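The adjacency-matrix implementation above scans every vertex on each iteration to find the minimum, giving O(V²) time. A sketch of the standard heap-based alternative (roughly O(E log V)), written independently of the class above using only the standard library:

```python
from __future__ import annotations

import heapq


def dijkstra_heap(adj: dict[int, list[tuple[int, int]]], source: int) -> dict[int, int]:
    """adj maps vertex -> list of (neighbor, weight); returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist


# tiny triangle graph: 0-1 (weight 4), 1-2 (weight 8), 0-2 (weight 100)
adj = {0: [(1, 4), (2, 100)], 1: [(0, 4), (2, 8)], 2: [(1, 8), (0, 100)]}
print(dijkstra_heap(adj, 0))  # {0: 0, 1: 4, 2: 12}
```

The heap replaces the linear `minimum_distance` scan; the stale-entry check makes re-pushed vertices harmless.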
| from __future__ import annotations
class Graph:
def __init__(self, vertices: int) -> None:
"""
>>> graph = Graph(2)
>>> graph.vertices
2
>>> len(graph.graph)
2
>>> len(graph.graph[0])
2
"""
self.vertices = vertices
self.graph = [[0] * vertices for _ in range(vertices)]
def print_solution(self, distances_from_source: list[int]) -> None:
"""
>>> Graph(0).print_solution([]) # doctest: +NORMALIZE_WHITESPACE
Vertex Distance from Source
"""
print("Vertex \t Distance from Source")
for vertex in range(self.vertices):
print(vertex, "\t\t", distances_from_source[vertex])
def minimum_distance(
self, distances_from_source: list[int], visited: list[bool]
) -> int:
"""
A utility function to find the vertex with minimum distance value, from the set
of vertices not yet included in shortest path tree.
>>> Graph(3).minimum_distance([1, 2, 3], [False, False, True])
0
"""
# Initialize minimum distance for next node
minimum = 1e7
min_index = 0
# Search for the nearest vertex not yet in the shortest path tree
for vertex in range(self.vertices):
if distances_from_source[vertex] < minimum and visited[vertex] is False:
minimum = distances_from_source[vertex]
min_index = vertex
return min_index
def dijkstra(self, source: int) -> None:
"""
Function that implements Dijkstra's single source shortest path algorithm for a
graph represented using adjacency matrix representation.
>>> Graph(4).dijkstra(1) # doctest: +NORMALIZE_WHITESPACE
Vertex Distance from Source
0 10000000
1 0
2 10000000
3 10000000
"""
distances = [int(1e7)] * self.vertices # distances from the source
distances[source] = 0
visited = [False] * self.vertices
for _ in range(self.vertices):
u = self.minimum_distance(distances, visited)
visited[u] = True
# Update dist value of the adjacent vertices
# of the picked vertex only if the current
# distance is greater than new distance and
# the vertex is not in the shortest path tree
for v in range(self.vertices):
if (
self.graph[u][v] > 0
and visited[v] is False
and distances[v] > distances[u] + self.graph[u][v]
):
distances[v] = distances[u] + self.graph[u][v]
self.print_solution(distances)
if __name__ == "__main__":
graph = Graph(9)
graph.graph = [
[0, 4, 0, 0, 0, 0, 0, 8, 0],
[4, 0, 8, 0, 0, 0, 0, 11, 0],
[0, 8, 0, 7, 0, 4, 0, 0, 2],
[0, 0, 7, 0, 9, 14, 0, 0, 0],
[0, 0, 0, 9, 0, 10, 0, 0, 0],
[0, 0, 4, 14, 10, 0, 2, 0, 0],
[0, 0, 0, 0, 0, 2, 0, 1, 6],
[8, 11, 0, 0, 0, 0, 1, 0, 7],
[0, 0, 2, 0, 0, 0, 6, 7, 0],
]
graph.dijkstra(0)
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| # Created by sarathkaul on 12/11/19
import requests
_NEWS_API = "https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey="
def fetch_bbc_news(bbc_news_api_key: str) -> None:
# fetching a list of articles in json format
bbc_news_page = requests.get(_NEWS_API + bbc_news_api_key).json()
# each article in the list is a dict
for i, article in enumerate(bbc_news_page["articles"], 1):
print(f"{i}.) {article['title']}")
if __name__ == "__main__":
fetch_bbc_news(bbc_news_api_key="<Your BBC News API key goes here>")
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 | Fix mypy errors in circular_linked_list.py and swap_nodes.py. ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Bit manipulation
Bit manipulation is the act of manipulating bits to detect errors (Hamming code), encrypt and decrypt messages (more on that in the 'ciphers' folder), or do just about anything at the lowest level of your computer.
* <https://en.wikipedia.org/wiki/Bit_manipulation>
* <https://docs.python.org/3/reference/expressions.html#binary-bitwise-operations>
* <https://docs.python.org/3/reference/expressions.html#unary-arithmetic-and-bitwise-operations>
* <https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types>
* <https://wiki.python.org/moin/BitManipulation>
* <https://wiki.python.org/moin/BitwiseOperators>
* <https://www.tutorialspoint.com/python3/bitwise_operators_example.htm>
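A few of the basic operations those links cover, sketched in Python:

```python
# common bit tricks on a small example value
x = 0b1011  # 11

print(x & 1)              # 1  -> lowest bit (odd/even test)
print(x | 0b0100)         # 15 -> set bit 2
print(x ^ 0b1111)         # 4  -> flip the low four bits
print(x >> 1)             # 5  -> floor-divide by 2
print(x & (x - 1))        # 10 -> clear the lowest set bit
print(bin(x).count("1"))  # 3  -> population count (number of set bits)
```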
| # Bit manipulation
Bit manipulation is the act of manipulating bits to detect errors (Hamming code), encrypt and decrypt messages (more on that in the 'ciphers' folder), or do just about anything at the lowest level of your computer.
* <https://en.wikipedia.org/wiki/Bit_manipulation>
* <https://docs.python.org/3/reference/expressions.html#binary-bitwise-operations>
* <https://docs.python.org/3/reference/expressions.html#unary-arithmetic-and-bitwise-operations>
* <https://docs.python.org/3/library/stdtypes.html#bitwise-operations-on-integer-types>
* <https://wiki.python.org/moin/BitManipulation>
* <https://wiki.python.org/moin/BitwiseOperators>
* <https://www.tutorialspoint.com/python3/bitwise_operators_example.htm>
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | ### Describe your change:
Fixes #9710
Fix mypy errors in `data_structures/linked_list/circular_linked_list.py` and `data_structures/linked_list/swap_nodes.py` that were let through in #9668. These errors are causing every new PR to fail the pre-commit check.
My fixes primarily consisted of `assert` statements and type hints in order to convince mypy of (implicitly understood) type guarantees.
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 |
| MIT License
Copyright (c) 2016-2022 TheAlgorithms and contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 |
| # Sorting Algorithms
Sorting is the process of arranging data in a specific order, as defined by a sorting algorithm. The most common orders are lexicographical and numerical. Sorting matters because searching sorted data can be heavily optimised, and sorted data is often easier to read.
This section contains implementations of many important sorting algorithms, each suited to different scenarios.
## References
* <https://www.tutorialspoint.com/python_data_structure/python_sorting_algorithms.htm>
* <https://www.geeksforgeeks.org/sorting-algorithms-in-python>
* <https://realpython.com/sorting-algorithms-python>
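The search speed-up mentioned in the overview can be demonstrated with Python's standard `bisect` module (a minimal sketch, not part of this repository's sorting implementations):

```python
import bisect

data = sorted([42, 7, 19, 3, 88, 64])
# On the sorted list, binary search locates a value in O(log n)
# comparisons instead of the O(n) scan an unsorted list needs.
index = bisect.bisect_left(data, 42)
print(data)   # [3, 7, 19, 42, 64, 88]
print(index)  # 3
```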
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 |
| """Implementation of Basic Math in Python."""
import math
def prime_factors(n: int) -> list:
"""Find Prime Factors.
>>> prime_factors(100)
[2, 2, 5, 5]
>>> prime_factors(0)
Traceback (most recent call last):
...
ValueError: Only positive integers have prime factors
>>> prime_factors(-10)
Traceback (most recent call last):
...
ValueError: Only positive integers have prime factors
"""
if n <= 0:
raise ValueError("Only positive integers have prime factors")
pf = []
while n % 2 == 0:
pf.append(2)
n = int(n / 2)
for i in range(3, int(math.sqrt(n)) + 1, 2):
while n % i == 0:
pf.append(i)
n = int(n / i)
if n > 2:
pf.append(n)
return pf
def number_of_divisors(n: int) -> int:
"""Calculate Number of Divisors of an Integer.
>>> number_of_divisors(100)
9
>>> number_of_divisors(0)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
>>> number_of_divisors(-10)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
"""
if n <= 0:
raise ValueError("Only positive numbers are accepted")
div = 1
temp = 1
while n % 2 == 0:
temp += 1
n = int(n / 2)
div *= temp
for i in range(3, int(math.sqrt(n)) + 1, 2):
temp = 1
while n % i == 0:
temp += 1
n = int(n / i)
div *= temp
if n > 1:
div *= 2
return div
def sum_of_divisors(n: int) -> int:
"""Calculate Sum of Divisors.
>>> sum_of_divisors(100)
217
>>> sum_of_divisors(0)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
>>> sum_of_divisors(-10)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
"""
if n <= 0:
raise ValueError("Only positive numbers are accepted")
s = 1
temp = 1
while n % 2 == 0:
temp += 1
n = int(n / 2)
if temp > 1:
s *= (2**temp - 1) / (2 - 1)
for i in range(3, int(math.sqrt(n)) + 1, 2):
temp = 1
while n % i == 0:
temp += 1
n = int(n / i)
if temp > 1:
s *= (i**temp - 1) / (i - 1)
return int(s)
def euler_phi(n: int) -> int:
"""Calculate Euler's Phi Function.
>>> euler_phi(100)
40
"""
s = n
for x in set(prime_factors(n)):
s *= (x - 1) / x
return int(s)
if __name__ == "__main__":
import doctest
doctest.testmod()
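The `euler_phi` result can be cross-checked against a direct count of integers coprime to n (the `euler_phi_bruteforce` helper below is hypothetical, added only for illustration):

```python
from math import gcd


def euler_phi_bruteforce(n: int) -> int:
    # O(n log n) count of the integers in [1, n] coprime to n --
    # far slower than the prime-factor formula, but useful as a check.
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)


print(euler_phi_bruteforce(100))  # 40, matching euler_phi(100)
```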
| """Implementation of Basic Math in Python."""
import math
def prime_factors(n: int) -> list:
"""Find Prime Factors.
>>> prime_factors(100)
[2, 2, 5, 5]
>>> prime_factors(0)
Traceback (most recent call last):
...
ValueError: Only positive integers have prime factors
>>> prime_factors(-10)
Traceback (most recent call last):
...
ValueError: Only positive integers have prime factors
"""
if n <= 0:
raise ValueError("Only positive integers have prime factors")
pf = []
while n % 2 == 0:
pf.append(2)
n = int(n / 2)
for i in range(3, int(math.sqrt(n)) + 1, 2):
while n % i == 0:
pf.append(i)
n = int(n / i)
if n > 2:
pf.append(n)
return pf
def number_of_divisors(n: int) -> int:
"""Calculate Number of Divisors of an Integer.
>>> number_of_divisors(100)
9
>>> number_of_divisors(0)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
>>> number_of_divisors(-10)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
"""
if n <= 0:
raise ValueError("Only positive numbers are accepted")
div = 1
temp = 1
while n % 2 == 0:
temp += 1
n = int(n / 2)
div *= temp
for i in range(3, int(math.sqrt(n)) + 1, 2):
temp = 1
while n % i == 0:
temp += 1
n = int(n / i)
div *= temp
if n > 1:
div *= 2
return div
def sum_of_divisors(n: int) -> int:
"""Calculate Sum of Divisors.
>>> sum_of_divisors(100)
217
>>> sum_of_divisors(0)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
>>> sum_of_divisors(-10)
Traceback (most recent call last):
...
ValueError: Only positive numbers are accepted
"""
if n <= 0:
raise ValueError("Only positive numbers are accepted")
s = 1
temp = 1
while n % 2 == 0:
temp += 1
n = int(n / 2)
if temp > 1:
s *= (2**temp - 1) / (2 - 1)
for i in range(3, int(math.sqrt(n)) + 1, 2):
temp = 1
while n % i == 0:
temp += 1
n = int(n / i)
if temp > 1:
s *= (i**temp - 1) / (i - 1)
return int(s)
def euler_phi(n: int) -> int:
"""Calculate Euler's Phi Function.
>>> euler_phi(100)
40
"""
s = n
for x in set(prime_factors(n)):
s *= (x - 1) / x
return int(s)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 9,707 | Fix mypy errors in circular_linked_list.py and swap_nodes.py | tianyizheng02 | "2023-10-04T14:51:39Z" | "2023-10-04T16:05:01Z" | 3fd3497f15982a7286326b520b5e7b52767da1f3 | dfdd78135df938d948ba3044aca628aca08886e7 |
| # Title: Dijkstra's Algorithm for finding single source shortest path from scratch
# Author: Shubham Malik
# References: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
import math
import sys
# For storing the vertex set to retrieve node with the lowest distance
class PriorityQueue:
# Based on Min Heap
def __init__(self):
self.cur_size = 0
self.array = []
self.pos = {} # To store the pos of node in array
def is_empty(self):
return self.cur_size == 0
    def min_heapify(self, idx):
        lc = self.left(idx)
        rc = self.right(idx)
        # self.array is a list, so it must be indexed, not called
        if lc < self.cur_size and self.array[lc][0] < self.array[idx][0]:
            smallest = lc
        else:
            smallest = idx
        if rc < self.cur_size and self.array[rc][0] < self.array[smallest][0]:
            smallest = rc
        if smallest != idx:
            self.swap(idx, smallest)
            self.min_heapify(smallest)
def insert(self, tup):
# Inserts a node into the Priority Queue
self.pos[tup[1]] = self.cur_size
self.cur_size += 1
self.array.append((sys.maxsize, tup[1]))
self.decrease_key((sys.maxsize, tup[1]), tup[0])
    def extract_min(self):
        # Removes and returns the min element at top of priority queue
        min_node = self.array[0][1]
        self.array[0] = self.array[self.cur_size - 1]
        self.pos[self.array[0][1]] = 0  # keep pos in sync with the moved element
        self.cur_size -= 1
        self.min_heapify(0)  # restore the heap property from the root (index 0)
        del self.pos[min_node]
        return min_node
def left(self, i):
# returns the index of left child
return 2 * i + 1
def right(self, i):
# returns the index of right child
return 2 * i + 2
    def par(self, i):
        # returns the index of the parent in a 0-indexed heap
        return (i - 1) // 2
def swap(self, i, j):
# swaps array elements at indices i and j
# update the pos{}
self.pos[self.array[i][1]] = j
self.pos[self.array[j][1]] = i
temp = self.array[i]
self.array[i] = self.array[j]
self.array[j] = temp
def decrease_key(self, tup, new_d):
idx = self.pos[tup[1]]
        # assuming new_d is at most the old distance
self.array[idx] = (new_d, tup[1])
while idx > 0 and self.array[self.par(idx)][0] > self.array[idx][0]:
self.swap(idx, self.par(idx))
idx = self.par(idx)
class Graph:
def __init__(self, num):
self.adjList = {} # To store graph: u -> (v,w)
self.num_nodes = num # Number of nodes in graph
# To store the distance from source vertex
self.dist = [0] * self.num_nodes
self.par = [-1] * self.num_nodes # To store the path
def add_edge(self, u, v, w):
# Edge going from node u to v and v to u with weight w
# u (w)-> v, v (w) -> u
# Check if u already in graph
if u in self.adjList:
self.adjList[u].append((v, w))
else:
self.adjList[u] = [(v, w)]
# Assuming undirected graph
if v in self.adjList:
self.adjList[v].append((u, w))
else:
self.adjList[v] = [(u, w)]
def show_graph(self):
# u -> v(w)
for u in self.adjList:
print(u, "->", " -> ".join(str(f"{v}({w})") for v, w in self.adjList[u]))
def dijkstra(self, src):
# Flush old junk values in par[]
self.par = [-1] * self.num_nodes
# src is the source node
self.dist[src] = 0
q = PriorityQueue()
q.insert((0, src)) # (dist from src, node)
for u in self.adjList:
if u != src:
self.dist[u] = sys.maxsize # Infinity
self.par[u] = -1
while not q.is_empty():
u = q.extract_min() # Returns node with the min dist from source
# Update the distance of all the neighbours of u and
# if their prev dist was INFINITY then push them in Q
for v, w in self.adjList[u]:
new_dist = self.dist[u] + w
if self.dist[v] > new_dist:
if self.dist[v] == sys.maxsize:
q.insert((new_dist, v))
else:
q.decrease_key((self.dist[v], v), new_dist)
self.dist[v] = new_dist
self.par[v] = u
# Show the shortest distances from src
self.show_distances(src)
def show_distances(self, src):
print(f"Distance from node: {src}")
for u in range(self.num_nodes):
print(f"Node {u} has distance: {self.dist[u]}")
def show_path(self, src, dest):
# To show the shortest path from src to dest
# WARNING: Use it *after* calling dijkstra
path = []
cost = 0
temp = dest
# Backtracking from dest to src
while self.par[temp] != -1:
path.append(temp)
if temp != src:
for v, w in self.adjList[temp]:
if v == self.par[temp]:
cost += w
break
temp = self.par[temp]
path.append(src)
path.reverse()
print(f"----Path to reach {dest} from {src}----")
for u in path:
print(f"{u}", end=" ")
if u != dest:
print("-> ", end="")
print("\nTotal cost of path: ", cost)
if __name__ == "__main__":
graph = Graph(9)
graph.add_edge(0, 1, 4)
graph.add_edge(0, 7, 8)
graph.add_edge(1, 2, 8)
graph.add_edge(1, 7, 11)
graph.add_edge(2, 3, 7)
graph.add_edge(2, 8, 2)
graph.add_edge(2, 5, 4)
graph.add_edge(3, 4, 9)
graph.add_edge(3, 5, 14)
graph.add_edge(4, 5, 10)
graph.add_edge(5, 6, 2)
graph.add_edge(6, 7, 1)
graph.add_edge(6, 8, 6)
graph.add_edge(7, 8, 7)
graph.show_graph()
graph.dijkstra(0)
graph.show_path(0, 4)
# OUTPUT
# 0 -> 1(4) -> 7(8)
# 1 -> 0(4) -> 2(8) -> 7(11)
# 7 -> 0(8) -> 1(11) -> 6(1) -> 8(7)
# 2 -> 1(8) -> 3(7) -> 8(2) -> 5(4)
# 3 -> 2(7) -> 4(9) -> 5(14)
# 8 -> 2(2) -> 6(6) -> 7(7)
# 5 -> 2(4) -> 3(14) -> 4(10) -> 6(2)
# 4 -> 3(9) -> 5(10)
# 6 -> 5(2) -> 7(1) -> 8(6)
# Distance from node: 0
# Node 0 has distance: 0
# Node 1 has distance: 4
# Node 2 has distance: 12
# Node 3 has distance: 19
# Node 4 has distance: 21
# Node 5 has distance: 11
# Node 6 has distance: 9
# Node 7 has distance: 8
# Node 8 has distance: 14
# ----Path to reach 4 from 0----
# 0 -> 7 -> 6 -> 5 -> 4
# Total cost of path: 21
| # Title: Dijkstra's Algorithm for finding single source shortest path from scratch
# Author: Shubham Malik
# References: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
import math
import sys
# For storing the vertex set to retrieve node with the lowest distance
class PriorityQueue:
# Based on Min Heap
def __init__(self):
self.cur_size = 0
self.array = []
self.pos = {} # To store the pos of node in array
def is_empty(self):
return self.cur_size == 0
def min_heapify(self, idx):
lc = self.left(idx)
rc = self.right(idx)
if lc < self.cur_size and self.array[lc][0] < self.array[idx][0]:
smallest = lc
else:
smallest = idx
if rc < self.cur_size and self.array[rc][0] < self.array[smallest][0]:
smallest = rc
if smallest != idx:
self.swap(idx, smallest)
self.min_heapify(smallest)
def insert(self, tup):
# Inserts a node into the Priority Queue
self.pos[tup[1]] = self.cur_size
self.cur_size += 1
self.array.append((sys.maxsize, tup[1]))
self.decrease_key((sys.maxsize, tup[1]), tup[0])
def extract_min(self):
# Removes and returns the min element at top of priority queue
min_node = self.array[0][1]
self.array[0] = self.array[self.cur_size - 1]
self.pos[self.array[0][1]] = 0  # keep pos in sync with the moved node
self.cur_size -= 1
self.min_heapify(0)  # restore the heap property from the root
del self.pos[min_node]
return min_node
def left(self, i):
# returns the index of left child
return 2 * i + 1
def right(self, i):
# returns the index of right child
return 2 * i + 2
def par(self, i):
# returns the index of the parent; with children at 2*i + 1 and 2*i + 2,
# the parent of index i is (i - 1) // 2
return (i - 1) // 2
def swap(self, i, j):
# swaps array elements at indices i and j
# update the pos{}
self.pos[self.array[i][1]] = j
self.pos[self.array[j][1]] = i
temp = self.array[i]
self.array[i] = self.array[j]
self.array[j] = temp
def decrease_key(self, tup, new_d):
idx = self.pos[tup[1]]
# assuming the new_d is at most old_d
self.array[idx] = (new_d, tup[1])
while idx > 0 and self.array[self.par(idx)][0] > self.array[idx][0]:
self.swap(idx, self.par(idx))
idx = self.par(idx)
class Graph:
def __init__(self, num):
self.adjList = {} # To store graph: u -> (v,w)
self.num_nodes = num # Number of nodes in graph
# To store the distance from source vertex
self.dist = [0] * self.num_nodes
self.par = [-1] * self.num_nodes # To store the path
def add_edge(self, u, v, w):
# Edge going from node u to v and v to u with weight w
# u (w)-> v, v (w) -> u
# Check if u already in graph
if u in self.adjList:
self.adjList[u].append((v, w))
else:
self.adjList[u] = [(v, w)]
# Assuming undirected graph
if v in self.adjList:
self.adjList[v].append((u, w))
else:
self.adjList[v] = [(u, w)]
def show_graph(self):
# u -> v(w)
for u in self.adjList:
print(u, "->", " -> ".join(str(f"{v}({w})") for v, w in self.adjList[u]))
def dijkstra(self, src):
# Flush old junk values in par[]
self.par = [-1] * self.num_nodes
# src is the source node
self.dist[src] = 0
q = PriorityQueue()
q.insert((0, src)) # (dist from src, node)
for u in self.adjList:
if u != src:
self.dist[u] = sys.maxsize # Infinity
self.par[u] = -1
while not q.is_empty():
u = q.extract_min() # Returns node with the min dist from source
# Update the distance of all the neighbours of u and
# if their prev dist was INFINITY then push them in Q
for v, w in self.adjList[u]:
new_dist = self.dist[u] + w
if self.dist[v] > new_dist:
if self.dist[v] == sys.maxsize:
q.insert((new_dist, v))
else:
q.decrease_key((self.dist[v], v), new_dist)
self.dist[v] = new_dist
self.par[v] = u
# Show the shortest distances from src
self.show_distances(src)
def show_distances(self, src):
print(f"Distance from node: {src}")
for u in range(self.num_nodes):
print(f"Node {u} has distance: {self.dist[u]}")
def show_path(self, src, dest):
# To show the shortest path from src to dest
# WARNING: Use it *after* calling dijkstra
path = []
cost = 0
temp = dest
# Backtracking from dest to src
while self.par[temp] != -1:
path.append(temp)
if temp != src:
for v, w in self.adjList[temp]:
if v == self.par[temp]:
cost += w
break
temp = self.par[temp]
path.append(src)
path.reverse()
print(f"----Path to reach {dest} from {src}----")
for u in path:
print(f"{u}", end=" ")
if u != dest:
print("-> ", end="")
print("\nTotal cost of path: ", cost)
if __name__ == "__main__":
graph = Graph(9)
graph.add_edge(0, 1, 4)
graph.add_edge(0, 7, 8)
graph.add_edge(1, 2, 8)
graph.add_edge(1, 7, 11)
graph.add_edge(2, 3, 7)
graph.add_edge(2, 8, 2)
graph.add_edge(2, 5, 4)
graph.add_edge(3, 4, 9)
graph.add_edge(3, 5, 14)
graph.add_edge(4, 5, 10)
graph.add_edge(5, 6, 2)
graph.add_edge(6, 7, 1)
graph.add_edge(6, 8, 6)
graph.add_edge(7, 8, 7)
graph.show_graph()
graph.dijkstra(0)
graph.show_path(0, 4)
# OUTPUT
# 0 -> 1(4) -> 7(8)
# 1 -> 0(4) -> 2(8) -> 7(11)
# 7 -> 0(8) -> 1(11) -> 6(1) -> 8(7)
# 2 -> 1(8) -> 3(7) -> 8(2) -> 5(4)
# 3 -> 2(7) -> 4(9) -> 5(14)
# 8 -> 2(2) -> 6(6) -> 7(7)
# 5 -> 2(4) -> 3(14) -> 4(10) -> 6(2)
# 4 -> 3(9) -> 5(10)
# 6 -> 5(2) -> 7(1) -> 8(6)
# Distance from node: 0
# Node 0 has distance: 0
# Node 1 has distance: 4
# Node 2 has distance: 12
# Node 3 has distance: 19
# Node 4 has distance: 21
# Node 5 has distance: 11
# Node 6 has distance: 9
# Node 7 has distance: 8
# Node 8 has distance: 14
# ----Path to reach 4 from 0----
# 0 -> 7 -> 6 -> 5 -> 4
# Total cost of path: 21
| -1 |
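The expected distances in the OUTPUT comment above can be cross-checked with a compact Dijkstra built on Python's standard `heapq`. This sketch uses lazy deletion of stale heap entries instead of a `decrease_key` operation, but computes the same shortest distances for the same graph:

```python
import heapq


def dijkstra(adj: dict[int, list[tuple[int, int]]], src: int) -> dict[int, int]:
    """Shortest distances from src using a binary heap with lazy deletion."""
    dist = {u: float("inf") for u in adj}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry; a shorter path was already found
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist


# Same undirected graph as in the __main__ block above
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7), (2, 8, 2),
         (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10), (5, 6, 2), (6, 7, 1),
         (6, 8, 6), (7, 8, 7)]
adj = {u: [] for u in range(9)}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

print(dijkstra(adj, 0))  # node 4 should come out at distance 21
```

The lazy-deletion variant may leave duplicate entries on the heap, trading a little memory for a much simpler queue than the indexed min-heap above.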
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| name: "build"
on:
pull_request:
schedule:
- cron: "0 0 * * *" # Run every day
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: 3.11
- uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools six wheel
python -m pip install pytest-cov -r requirements.txt
- name: Run tests
# TODO: #8818 Re-enable quantum tests
run: pytest
--ignore=quantum/q_fourier_transform.py
--ignore=project_euler/
--ignore=scripts/validate_solutions.py
--cov-report=term-missing:skip-covered
--cov=. .
- if: ${{ success() }}
run: scripts/build_directory_md.py 2>&1 | tee DIRECTORY.md
| name: "build"
on:
pull_request:
schedule:
- cron: "0 0 * * *" # Run every day
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v4
with:
python-version: 3.12
allow-prereleases: true
- uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools six wheel
python -m pip install pytest-cov -r requirements.txt
- name: Run tests
# TODO: #8818 Re-enable quantum tests
run: pytest
--ignore=quantum/q_fourier_transform.py
--ignore=project_euler/
--ignore=scripts/validate_solutions.py
--cov-report=term-missing:skip-covered
--cov=. .
- if: ${{ success() }}
run: scripts/build_directory_md.py 2>&1 | tee DIRECTORY.md
| 1 |
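The cache step in the workflow above keys on `hashFiles('requirements.txt')`, so the pip cache is invalidated whenever the pinned requirements change. A rough local analogue of that keying idea is sketched below — an assumption-laden simplification, since GitHub's `hashFiles` actually combines per-file SHA-256 digests over a glob rather than reading one file:

```python
import hashlib
import tempfile


def content_key(path: str) -> str:
    """SHA-256 hex digest of a file's bytes, usable as a cache-key suffix."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Demo with a throwaway requirements-style file (hypothetical contents)
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("pytest-cov\n")
    path = f.name

key = f"Linux-pip-{content_key(path)}"
print(key)  # stable for identical contents, changes when the file changes
```

Two runs over identical file contents produce the same key, so the cache hits; editing the file yields a new digest and a cache miss, which is exactly the invalidation behavior the workflow relies on.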
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # https://beta.ruff.rs
name: ruff
on:
push:
branches:
- master
pull_request:
branches:
- master
jobs:
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- run: pip install --user ruff
- run: ruff --output-format=github .
| # https://beta.ruff.rs
name: ruff
on:
push:
branches:
- master
pull_request:
branches:
- master
jobs:
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: pip install --user ruff
- run: ruff --output-format=github .
| 1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Contributing guidelines
## Before contributing
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community).
## Contributing
### Contributor
We are very happy that you are considering implementing algorithms and data structures for others! This repository is referenced and used by learners from all over the globe. Being one of our contributors, you agree and confirm that:
- You did your work - no plagiarism allowed
- Any plagiarized work will not be merged.
- Your work will be distributed under [MIT License](LICENSE.md) once your pull request is merged
- Your submitted work fulfils or mostly fulfils our styles and standards
__New implementations__ are welcome! For example, new solutions for a problem, different representations for a graph data structure, or algorithm designs with different complexity. However, an __identical implementation__ of an existing one is not allowed. Please check whether the solution is already implemented or not before submitting your pull request.
__Improving comments__ and __writing proper tests__ are also highly welcome.
### Contribution
We appreciate any contribution, from fixing a grammar mistake in a comment to implementing complex algorithms. Please read this section if you are contributing your work.
Your contribution will be tested by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) to save time and mental energy. After you have submitted your pull request, you should see the GitHub Actions tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button and try to read through the GitHub Actions output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
If you are interested in resolving an [open issue](https://github.com/TheAlgorithms/Python/issues), simply make a pull request with your proposed fix. __We do not assign issues in this repo__ so please do not ask for permission to work on an issue.
Please help us keep our issue list small by adding `Fixes #{$ISSUE_NUMBER}` to the description of pull requests that resolve open issues.
For example, if your pull request fixes issue #10, then please add the following to its description:
```
Fixes #10
```
GitHub will use this tag to [auto-close the issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) if and when the PR is merged.
#### What is an Algorithm?
An Algorithm is one or more functions (or classes) that:
* take one or more inputs,
* perform some internal calculations or data manipulations,
* return one or more outputs,
* have minimal side effects (Ex. `print()`, `plot()`, `read()`, `write()`).
Algorithms should be packaged in a way that would make it easy for readers to put them into larger programs.
Algorithms should:
* have intuitive class and function names that make their purpose clear to readers
* use Python naming conventions and intuitive variable names to ease comprehension
* be flexible to take different input values
* have Python type hints for their input parameters and return values
* raise Python exceptions (`ValueError`, etc.) on erroneous input values
* have docstrings with clear explanations and/or URLs to source materials
* contain doctests that test both valid and erroneous input values
* return all calculation results instead of printing or plotting them
Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values. Those calculations or manipulations can use data types, classes, or functions of existing Python packages but each algorithm in this repo should add unique value.
#### Pre-commit plugin
Use [pre-commit](https://pre-commit.com/#installation) to automatically format your code to match our coding style:
```bash
python3 -m pip install pre-commit # only required the first time
pre-commit install
```
That's it! The plugin will run every time you commit any changes. If there are any errors found during the run, fix them and commit those changes. You can even run the plugin manually on all files:
```bash
pre-commit run --all-files --show-diff-on-failure
```
#### Coding Style
We want your work to be readable by others; therefore, we encourage you to note the following:
- Please write in Python 3.11+. For instance: `print()` is a function in Python 3 so `print "Hello"` will *not* work but `print("Hello")` will.
- Please focus hard on the naming of functions, classes, and variables. Help your reader by using __descriptive names__ that can help you to remove redundant comments.
- Single letter variable names are *old school* so please avoid them unless their life only spans a few lines.
- Expand acronyms because `gcd()` is hard to understand but `greatest_common_divisor()` is not.
- Please follow the [Python Naming Conventions](https://pep8.org/#prescriptive-naming-conventions) so variable_names and function_names should be lower_case, CONSTANTS in UPPERCASE, ClassNames should be CamelCase, etc.
- We encourage the use of Python [f-strings](https://realpython.com/python-f-strings/#f-strings-a-new-and-improved-way-to-format-strings-in-python) where they make the code easier to read.
- Please consider running [__psf/black__](https://github.com/python/black) on your Python file(s) before submitting your pull request. This is not yet a requirement but it does make your code more readable and automatically aligns it with much of [PEP 8](https://www.python.org/dev/peps/pep-0008/). There are other code formatters (autopep8, yapf) but the __black__ formatter is now hosted by the Python Software Foundation. To use it,
```bash
python3 -m pip install black # only required the first time
black .
```
- All submissions will need to pass the test `ruff .` before they will be accepted so if possible, try this test locally on your Python file(s) before submitting your pull request.
```bash
python3 -m pip install ruff # only required the first time
ruff .
```
- Original code submissions require docstrings or comments to describe your work.
- More on docstrings and comments:
If you used a Wikipedia article or some other source material to create your algorithm, please add the URL in a docstring or comment to help your reader.
The following are considered to be bad and may be requested to be improved:
```python
x = x + 2 # increased by 2
```
This is too trivial. Comments are expected to be explanatory. For comments, you can write them above, on or below a line of code, as long as you are consistent within the same piece of code.
We encourage you to put docstrings inside your functions but please pay attention to the indentation of docstrings. The following is a good example:
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b.
"""
return a + b
```
- Write tests (especially [__doctests__](https://docs.python.org/3/library/doctest.html)) to illustrate and verify your work. We highly encourage the use of _doctests on all functions_.
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b
>>> sum_ab(2, 2)
4
>>> sum_ab(-2, 3)
1
>>> sum_ab(4.9, 5.1)
10.0
"""
return a + b
```
These doctests will be run by pytest as part of our automated testing so please try to run your doctests locally and make sure that they are found and pass:
```bash
python3 -m doctest -v my_submission.py
```
The use of the Python builtin `input()` function is __not__ encouraged:
```python
input('Enter your input:')
# Or even worse...
input = eval(input("Enter your input: "))
```
However, if your code uses `input()` then we encourage you to gracefully deal with leading and trailing whitespace in user input by adding `.strip()` as in:
```python
starting_value = int(input("Please enter a starting value: ").strip())
```
The use of [Python type hints](https://docs.python.org/3/library/typing.html) is encouraged for function parameters and return values. Our automated testing will run [mypy](http://mypy-lang.org) so run that locally before making your submission.
```python
def sum_ab(a: int, b: int) -> int:
return a + b
```
Instructions on how to install mypy can be found [here](https://github.com/python/mypy). Please use the command `mypy --ignore-missing-imports .` to test all files or `mypy --ignore-missing-imports path/to/file.py` to test a specific file.
- [__List comprehensions and generators__](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) are preferred over the use of `lambda`, `map`, `filter`, `reduce` but the important thing is to demonstrate the power of Python in code that is easy to read and maintain.
- Avoid importing external libraries for basic algorithms. Only use those libraries for complicated algorithms.
- If you need a third-party module that is not in the file __requirements.txt__, please add it to that file as part of your submission.
#### Other Requirements for Submissions
- If you are submitting code in the `project_euler/` directory, please also read [the dedicated Guideline](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md) before contributing to our Project Euler library.
- The file extension for code files should be `.py`. Jupyter Notebooks should be submitted to [TheAlgorithms/Jupyter](https://github.com/TheAlgorithms/Jupyter).
- Strictly use snake_case (underscore_separated) in your file_name, as it will be easy to parse in future using scripts.
- Please avoid creating new directories if at all possible. Try to fit your work into the existing directory structure.
- If possible, follow the standard *within* the folder you are submitting to.
- If you have modified/added code work, make sure the code compiles before submitting.
- If you have modified/added documentation work, ensure your language is concise and contains no grammar errors.
- Do not update the README.md or DIRECTORY.md file which will be periodically autogenerated by our GitHub Actions processes.
- Add a corresponding explanation to [Algorithms-Explanation](https://github.com/TheAlgorithms/Algorithms-Explanation) (Optional but recommended).
- All submissions will be tested with [__mypy__](http://www.mypy-lang.org) so we encourage you to add [__Python type hints__](https://docs.python.org/3/library/typing.html) where it makes sense to do so.
- Most importantly,
- __Be consistent in the use of these guidelines when submitting.__
- __Join__ us on [Discord](https://discord.com/invite/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms/community) __now!__
- Happy coding!
Writer [@poyea](https://github.com/poyea), Jun 2019.
| # Contributing guidelines
## Before contributing
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before sending your pull requests, make sure that you __read the whole guidelines__. If you have any doubt on the contributing guide, please feel free to [state it clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community).
## Contributing
### Contributor
We are very happy that you are considering implementing algorithms and data structures for others! This repository is referenced and used by learners from all over the globe. Being one of our contributors, you agree and confirm that:
- You did your work - no plagiarism allowed
- Any plagiarized work will not be merged.
- Your work will be distributed under [MIT License](LICENSE.md) once your pull request is merged
- Your submitted work fulfils or mostly fulfils our styles and standards
__New implementations__ are welcome! For example, new solutions for a problem, different representations for a graph data structure, or algorithm designs with different complexity. However, an __identical implementation__ of an existing one is not allowed. Please check whether the solution is already implemented or not before submitting your pull request.
__Improving comments__ and __writing proper tests__ are also highly welcome.
### Contribution
We appreciate any contribution, from fixing a grammar mistake in a comment to implementing complex algorithms. Please read this section if you are contributing your work.
Your contribution will be tested by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) to save time and mental energy. After you have submitted your pull request, you should see the GitHub Actions tests start to run at the bottom of your submission page. If those tests fail, then click on the ___details___ button and try to read through the GitHub Actions output to understand the failure. If you do not understand, please leave a comment on your submission page and a community member will try to help.
If you are interested in resolving an [open issue](https://github.com/TheAlgorithms/Python/issues), simply make a pull request with your proposed fix. __We do not assign issues in this repo__ so please do not ask for permission to work on an issue.
Please help us keep our issue list small by adding `Fixes #{$ISSUE_NUMBER}` to the description of pull requests that resolve open issues.
For example, if your pull request fixes issue #10, then please add the following to its description:
```
Fixes #10
```
GitHub will use this tag to [auto-close the issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) if and when the PR is merged.
#### What is an Algorithm?
An Algorithm is one or more functions (or classes) that:
* take one or more inputs,
* perform some internal calculations or data manipulations,
* return one or more outputs,
* have minimal side effects (e.g. `print()`, `plot()`, `read()`, `write()`).
Algorithms should be packaged in a way that would make it easy for readers to put them into larger programs.
Algorithms should:
* have intuitive class and function names that make their purpose clear to readers
* use Python naming conventions and intuitive variable names to ease comprehension
* be flexible to take different input values
* have Python type hints for their input parameters and return values
* raise Python exceptions (`ValueError`, etc.) on erroneous input values
* have docstrings with clear explanations and/or URLs to source materials
* contain doctests that test both valid and erroneous input values
* return all calculation results instead of printing or plotting them
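The criteria above can be sketched in a single function. The following is a minimal, hypothetical example (not an actual file in this repo) with a descriptive name, type hints, a `ValueError` on erroneous input, a source URL, and doctests:

```python
def greatest_common_divisor(a: int, b: int) -> int:
    """
    Return the greatest common divisor of two non-negative integers
    using the Euclidean algorithm.

    https://en.wikipedia.org/wiki/Euclidean_algorithm

    >>> greatest_common_divisor(12, 18)
    6
    >>> greatest_common_divisor(7, 0)
    7
    """
    if a < 0 or b < 0:
        raise ValueError("Both inputs must be non-negative")
    while b:
        a, b = b, a % b  # repeatedly replace (a, b) with (b, a mod b)
    return a
```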
Algorithms in this repo should not be how-to examples for existing Python packages. Instead, they should perform internal calculations or manipulations to convert input values into different output values. Those calculations or manipulations can use data types, classes, or functions of existing Python packages but each algorithm in this repo should add unique value.
#### Pre-commit plugin
Use [pre-commit](https://pre-commit.com/#installation) to automatically format your code to match our coding style:
```bash
python3 -m pip install pre-commit # only required the first time
pre-commit install
```
That's it! The plugin will run every time you commit any changes. If there are any errors found during the run, fix them and commit those changes. You can even run the plugin manually on all files:
```bash
pre-commit run --all-files --show-diff-on-failure
```
#### Coding Style
We want your work to be readable by others; therefore, we encourage you to note the following:
- Please write in Python 3.12+. For instance: `print()` is a function in Python 3 so `print "Hello"` will *not* work but `print("Hello")` will.
- Please focus hard on the naming of functions, classes, and variables. Help your reader by using __descriptive names__ that can help you to remove redundant comments.
- Single letter variable names are *old school* so please avoid them unless their life only spans a few lines.
- Expand acronyms because `gcd()` is hard to understand but `greatest_common_divisor()` is not.
- Please follow the [Python Naming Conventions](https://pep8.org/#prescriptive-naming-conventions) so variable_names and function_names should be lower_case, CONSTANTS in UPPERCASE, ClassNames should be CamelCase, etc.
- We encourage the use of Python [f-strings](https://realpython.com/python-f-strings/#f-strings-a-new-and-improved-way-to-format-strings-in-python) where they make the code easier to read.
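  As a small, hypothetical illustration of the f-string preference, compare interpolation with the equivalent concatenation:
```python
name, count = "doctest", 3
# f-string: the interpolated values are visible inline
message = f"Ran {count} {name} examples"
# equivalent concatenation is noisier and needs explicit str() calls
message_concat = "Ran " + str(count) + " " + name + " examples"
assert message == message_concat == "Ran 3 doctest examples"
```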
- Please consider running [__psf/black__](https://github.com/python/black) on your Python file(s) before submitting your pull request. This is not yet a requirement but it does make your code more readable and automatically aligns it with much of [PEP 8](https://www.python.org/dev/peps/pep-0008/). There are other code formatters (autopep8, yapf) but the __black__ formatter is now hosted by the Python Software Foundation. To use it,
```bash
python3 -m pip install black # only required the first time
black .
```
- All submissions will need to pass the test `ruff .` before they will be accepted so if possible, try this test locally on your Python file(s) before submitting your pull request.
```bash
python3 -m pip install ruff # only required the first time
ruff .
```
- Original code submissions require docstrings or comments to describe your work.
- More on docstrings and comments:
If you used a Wikipedia article or some other source material to create your algorithm, please add the URL in a docstring or comment to help your reader.
The following are considered to be bad and may be requested to be improved:
```python
x = x + 2 # increased by 2
```
This is too trivial. Comments are expected to be explanatory. For comments, you can write them above, on, or below a line of code, as long as you are consistent within the same piece of code.
We encourage you to put docstrings inside your functions but please pay attention to the indentation of docstrings. The following is a good example:
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b.
"""
return a + b
```
- Write tests (especially [__doctests__](https://docs.python.org/3/library/doctest.html)) to illustrate and verify your work. We highly encourage the use of _doctests on all functions_.
```python
def sum_ab(a, b):
"""
Return the sum of two integers a and b
>>> sum_ab(2, 2)
4
>>> sum_ab(-2, 3)
1
>>> sum_ab(4.9, 5.1)
10.0
"""
return a + b
```
These doctests will be run by pytest as part of our automated testing so please try to run your doctests locally and make sure that they are found and pass:
```bash
python3 -m doctest -v my_submission.py
```
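Doctests can also cover erroneous input values, as the guidelines above request. A minimal sketch (using a hypothetical `reciprocal` function) shows the `Traceback` ellipsis form for testing a raised exception:
```python
def reciprocal(x: float) -> float:
    """
    Return 1/x, raising a ValueError for zero input.

    >>> reciprocal(4.0)
    0.25
    >>> reciprocal(0.0)
    Traceback (most recent call last):
        ...
    ValueError: x must be nonzero
    """
    if x == 0:
        raise ValueError("x must be nonzero")
    return 1 / x
```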
The use of the Python builtin `input()` function is __not__ encouraged:
```python
input('Enter your input:')
# Or even worse...
input = eval(input("Enter your input: "))
```
However, if your code uses `input()` then we encourage you to gracefully deal with leading and trailing whitespace in user input by adding `.strip()` as in:
```python
starting_value = int(input("Please enter a starting value: ").strip())
```
The use of [Python type hints](https://docs.python.org/3/library/typing.html) is encouraged for function parameters and return values. Our automated testing will run [mypy](http://mypy-lang.org) so run that locally before making your submission.
```python
def sum_ab(a: int, b: int) -> int:
return a + b
```
Instructions on how to install mypy can be found [here](https://github.com/python/mypy). Please use the command `mypy --ignore-missing-imports .` to test all files or `mypy --ignore-missing-imports path/to/file.py` to test a specific file.
- [__List comprehensions and generators__](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) are preferred over the use of `lambda`, `map`, `filter`, `reduce` but the important thing is to demonstrate the power of Python in code that is easy to read and maintain.
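  For instance, a list comprehension often states the intent in one readable line where the `lambda`/`map`/`filter` equivalent is harder to follow (a small, hypothetical snippet):
```python
numbers = [1, 2, 3, 4, 5]
# comprehension: "squares of the even numbers" reads left to right
even_squares = [n * n for n in numbers if n % 2 == 0]
# equivalent lambda/map/filter chain produces the same list
same_result = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))
assert even_squares == same_result == [4, 16]
```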
- Avoid importing external libraries for basic algorithms. Only use those libraries for complicated algorithms.
- If you need a third-party module that is not in the file __requirements.txt__, please add it to that file as part of your submission.
#### Other Requirements for Submissions
- If you are submitting code in the `project_euler/` directory, please also read [the dedicated Guideline](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md) before contributing to our Project Euler library.
- The file extension for code files should be `.py`. Jupyter Notebooks should be submitted to [TheAlgorithms/Jupyter](https://github.com/TheAlgorithms/Jupyter).
- Strictly use snake_case (underscore_separated) in your file_name, as it will be easy to parse in the future using scripts.
- Please avoid creating new directories if at all possible. Try to fit your work into the existing directory structure.
- If possible, follow the standard *within* the folder you are submitting to.
- If you have modified/added code work, make sure the code compiles before submitting.
- If you have modified/added documentation work, ensure your language is concise and contains no grammar errors.
- Do not update the README.md or DIRECTORY.md file which will be periodically autogenerated by our GitHub Actions processes.
- Add a corresponding explanation to [Algorithms-Explanation](https://github.com/TheAlgorithms/Algorithms-Explanation) (Optional but recommended).
- All submissions will be tested with [__mypy__](http://www.mypy-lang.org) so we encourage you to add [__Python type hints__](https://docs.python.org/3/library/typing.html) where it makes sense to do so.
- Most importantly,
- __Be consistent in the use of these guidelines when submitting.__
- __Join__ us on [Discord](https://discord.com/invite/c7MnfGFGa6) and [Gitter](https://gitter.im/TheAlgorithms/community) __now!__
- Happy coding!
Writer [@poyea](https://github.com/poyea), Jun 2019.
| 1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd |
## Arithmetic Analysis
* [Bisection](arithmetic_analysis/bisection.py)
* [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py)
* [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
* [Intersection](arithmetic_analysis/intersection.py)
* [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
* [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
* [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
* [Newton Method](arithmetic_analysis/newton_method.py)
* [Newton Raphson](arithmetic_analysis/newton_raphson.py)
* [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py)
* [Secant Method](arithmetic_analysis/secant_method.py)
## Audio Filters
* [Butterworth Filter](audio_filters/butterworth_filter.py)
* [Iir Filter](audio_filters/iir_filter.py)
* [Show Response](audio_filters/show_response.py)
## Backtracking
* [All Combinations](backtracking/all_combinations.py)
* [All Permutations](backtracking/all_permutations.py)
* [All Subsequences](backtracking/all_subsequences.py)
* [Coloring](backtracking/coloring.py)
* [Combination Sum](backtracking/combination_sum.py)
* [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
* [Knight Tour](backtracking/knight_tour.py)
* [Minimax](backtracking/minimax.py)
* [Minmax](backtracking/minmax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
* [Power Sum](backtracking/power_sum.py)
* [Rat In Maze](backtracking/rat_in_maze.py)
* [Sudoku](backtracking/sudoku.py)
* [Sum Of Subsets](backtracking/sum_of_subsets.py)
* [Word Search](backtracking/word_search.py)
## Bit Manipulation
* [Binary And Operator](bit_manipulation/binary_and_operator.py)
* [Binary Count Setbits](bit_manipulation/binary_count_setbits.py)
* [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py)
* [Binary Or Operator](bit_manipulation/binary_or_operator.py)
* [Binary Shifts](bit_manipulation/binary_shifts.py)
* [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
* [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
* [Bitwise Addition Recursive](bit_manipulation/bitwise_addition_recursive.py)
* [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
* [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
* [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
* [Highest Set Bit](bit_manipulation/highest_set_bit.py)
* [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
* [Is Even](bit_manipulation/is_even.py)
* [Is Power Of Two](bit_manipulation/is_power_of_two.py)
* [Missing Number](bit_manipulation/missing_number.py)
* [Numbers Different Signs](bit_manipulation/numbers_different_signs.py)
* [Reverse Bits](bit_manipulation/reverse_bits.py)
* [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)
## Blockchain
* [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py)
* [Diophantine Equation](blockchain/diophantine_equation.py)
* [Modular Division](blockchain/modular_division.py)
## Boolean Algebra
* [And Gate](boolean_algebra/and_gate.py)
* [Nand Gate](boolean_algebra/nand_gate.py)
* [Nor Gate](boolean_algebra/nor_gate.py)
* [Not Gate](boolean_algebra/not_gate.py)
* [Or Gate](boolean_algebra/or_gate.py)
* [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
* [Xnor Gate](boolean_algebra/xnor_gate.py)
* [Xor Gate](boolean_algebra/xor_gate.py)
## Cellular Automata
* [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
* [Game Of Life](cellular_automata/game_of_life.py)
* [Langtons Ant](cellular_automata/langtons_ant.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
* [Wa Tor](cellular_automata/wa_tor.py)
## Ciphers
* [A1Z26](ciphers/a1z26.py)
* [Affine Cipher](ciphers/affine_cipher.py)
* [Atbash](ciphers/atbash.py)
* [Autokey](ciphers/autokey.py)
* [Baconian Cipher](ciphers/baconian_cipher.py)
* [Base16](ciphers/base16.py)
* [Base32](ciphers/base32.py)
* [Base64](ciphers/base64.py)
* [Base85](ciphers/base85.py)
* [Beaufort Cipher](ciphers/beaufort_cipher.py)
* [Bifid](ciphers/bifid.py)
* [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py)
* [Caesar Cipher](ciphers/caesar_cipher.py)
* [Cryptomath Module](ciphers/cryptomath_module.py)
* [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py)
* [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py)
* [Diffie](ciphers/diffie.py)
* [Diffie Hellman](ciphers/diffie_hellman.py)
* [Elgamal Key Generator](ciphers/elgamal_key_generator.py)
* [Enigma Machine2](ciphers/enigma_machine2.py)
* [Hill Cipher](ciphers/hill_cipher.py)
* [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py)
* [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py)
* [Morse Code](ciphers/morse_code.py)
* [Onepad Cipher](ciphers/onepad_cipher.py)
* [Playfair Cipher](ciphers/playfair_cipher.py)
* [Polybius](ciphers/polybius.py)
* [Porta Cipher](ciphers/porta_cipher.py)
* [Rabin Miller](ciphers/rabin_miller.py)
* [Rail Fence Cipher](ciphers/rail_fence_cipher.py)
* [Rot13](ciphers/rot13.py)
* [Rsa Cipher](ciphers/rsa_cipher.py)
* [Rsa Factorization](ciphers/rsa_factorization.py)
* [Rsa Key Generator](ciphers/rsa_key_generator.py)
* [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py)
* [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py)
* [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py)
* [Trafid Cipher](ciphers/trafid_cipher.py)
* [Transposition Cipher](ciphers/transposition_cipher.py)
* [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py)
* [Vigenere Cipher](ciphers/vigenere_cipher.py)
* [Xor Cipher](ciphers/xor_cipher.py)
## Compression
* [Burrows Wheeler](compression/burrows_wheeler.py)
* [Huffman](compression/huffman.py)
* [Lempel Ziv](compression/lempel_ziv.py)
* [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py)
* [Lz77](compression/lz77.py)
* [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py)
* [Run Length Encoding](compression/run_length_encoding.py)
## Computer Vision
* [Cnn Classification](computer_vision/cnn_classification.py)
* [Flip Augmentation](computer_vision/flip_augmentation.py)
* [Haralick Descriptors](computer_vision/haralick_descriptors.py)
* [Harris Corner](computer_vision/harris_corner.py)
* [Horn Schunck](computer_vision/horn_schunck.py)
* [Mean Threshold](computer_vision/mean_threshold.py)
* [Mosaic Augmentation](computer_vision/mosaic_augmentation.py)
* [Pooling Functions](computer_vision/pooling_functions.py)
## Conversions
* [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py)
* [Binary To Decimal](conversions/binary_to_decimal.py)
* [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
* [Binary To Octal](conversions/binary_to_octal.py)
* [Convert Number To Words](conversions/convert_number_to_words.py)
* [Decimal To Any](conversions/decimal_to_any.py)
* [Decimal To Binary](conversions/decimal_to_binary.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
* [Energy Conversions](conversions/energy_conversions.py)
* [Excel Title To Column](conversions/excel_title_to_column.py)
* [Hex To Bin](conversions/hex_to_bin.py)
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
* [Length Conversion](conversions/length_conversion.py)
* [Molecular Chemistry](conversions/molecular_chemistry.py)
* [Octal To Binary](conversions/octal_to_binary.py)
* [Octal To Decimal](conversions/octal_to_decimal.py)
* [Prefix Conversions](conversions/prefix_conversions.py)
* [Prefix Conversions String](conversions/prefix_conversions_string.py)
* [Pressure Conversions](conversions/pressure_conversions.py)
* [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py)
* [Roman Numerals](conversions/roman_numerals.py)
* [Speed Conversions](conversions/speed_conversions.py)
* [Temperature Conversions](conversions/temperature_conversions.py)
* [Volume Conversions](conversions/volume_conversions.py)
* [Weight Conversion](conversions/weight_conversion.py)
## Data Structures
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
* [Product Sum](data_structures/arrays/product_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
* [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py)
* [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py)
* [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py)
* [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py)
* [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py)
* [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py)
* [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py)
* [Distribute Coins](data_structures/binary_tree/distribute_coins.py)
* [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py)
* [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py)
* [Is Bst](data_structures/binary_tree/is_bst.py)
* [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py)
* [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py)
* [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py)
* [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py)
* [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py)
* [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py)
* [Red Black Tree](data_structures/binary_tree/red_black_tree.py)
* [Segment Tree](data_structures/binary_tree/segment_tree.py)
* [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py)
* [Treap](data_structures/binary_tree/treap.py)
* [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py)
* Disjoint Set
* [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py)
* [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
* Hashing
* [Bloom Filter](data_structures/hashing/bloom_filter.py)
* [Double Hash](data_structures/hashing/double_hash.py)
* [Hash Map](data_structures/hashing/hash_map.py)
* [Hash Table](data_structures/hashing/hash_table.py)
* [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py)
* Number Theory
* [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
* [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
* Tests
* [Test Hash Map](data_structures/hashing/tests/test_hash_map.py)
* Heap
* [Binomial Heap](data_structures/heap/binomial_heap.py)
* [Heap](data_structures/heap/heap.py)
* [Heap Generic](data_structures/heap/heap_generic.py)
* [Max Heap](data_structures/heap/max_heap.py)
* [Min Heap](data_structures/heap/min_heap.py)
* [Randomized Heap](data_structures/heap/randomized_heap.py)
* [Skew Heap](data_structures/heap/skew_heap.py)
* Linked List
* [Circular Linked List](data_structures/linked_list/circular_linked_list.py)
* [Deque Doubly](data_structures/linked_list/deque_doubly.py)
* [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py)
* [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py)
* [From Sequence](data_structures/linked_list/from_sequence.py)
* [Has Loop](data_structures/linked_list/has_loop.py)
* [Is Palindrome](data_structures/linked_list/is_palindrome.py)
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
* [Reverse K Group](data_structures/linked_list/reverse_k_group.py)
* [Rotate To The Right](data_structures/linked_list/rotate_to_the_right.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
* [Swap Nodes](data_structures/linked_list/swap_nodes.py)
* Queue
* [Circular Queue](data_structures/queue/circular_queue.py)
* [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py)
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
* [Queue By List](data_structures/queue/queue_by_list.py)
* [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
* [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
* [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
* [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
* [Next Greater Element](data_structures/stacks/next_greater_element.py)
* [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py)
* [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py)
* [Stack](data_structures/stacks/stack.py)
* [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py)
* [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py)
* [Stock Span Problem](data_structures/stacks/stock_span_problem.py)
* Trie
* [Radix Tree](data_structures/trie/radix_tree.py)
* [Trie](data_structures/trie/trie.py)
## Digital Image Processing
* [Change Brightness](digital_image_processing/change_brightness.py)
* [Change Contrast](digital_image_processing/change_contrast.py)
* [Convert To Negative](digital_image_processing/convert_to_negative.py)
* Dithering
* [Burkes](digital_image_processing/dithering/burkes.py)
* Edge Detection
* [Canny](digital_image_processing/edge_detection/canny.py)
* Filters
* [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py)
* [Convolve](digital_image_processing/filters/convolve.py)
* [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
* [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
* [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
* [Median Filter](digital_image_processing/filters/median_filter.py)
* [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
* Histogram Equalization
* [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py)
* [Index Calculation](digital_image_processing/index_calculation.py)
* Morphological Operations
* [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py)
* [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py)
* Resize
* [Resize](digital_image_processing/resize/resize.py)
* Rotation
* [Rotation](digital_image_processing/rotation/rotation.py)
* [Sepia](digital_image_processing/sepia.py)
* [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py)
## Divide And Conquer
* [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py)
* [Convex Hull](divide_and_conquer/convex_hull.py)
* [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py)
* [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py)
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
* [Max Subarray](divide_and_conquer/max_subarray.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
* [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
* [All Construct](dynamic_programming/all_construct.py)
* [Bitmask](dynamic_programming/bitmask.py)
* [Catalan Numbers](dynamic_programming/catalan_numbers.py)
* [Climbing Stairs](dynamic_programming/climbing_stairs.py)
* [Combination Sum Iv](dynamic_programming/combination_sum_iv.py)
* [Edit Distance](dynamic_programming/edit_distance.py)
* [Factorial](dynamic_programming/factorial.py)
* [Fast Fibonacci](dynamic_programming/fast_fibonacci.py)
* [Fibonacci](dynamic_programming/fibonacci.py)
* [Fizz Buzz](dynamic_programming/fizz_buzz.py)
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
* [K Means Clustering Tensorflow](dynamic_programming/k_means_clustering_tensorflow.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
* [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Product Subarray](dynamic_programming/max_product_subarray.py)
* [Max Subarray Sum](dynamic_programming/max_subarray_sum.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
* [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Regex Match](dynamic_programming/regex_match.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Smith Waterman](dynamic_programming/smith_waterman.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Tribonacci](dynamic_programming/tribonacci.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
## Electronics
* [Apparent Power](electronics/apparent_power.py)
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
* [Coulombs Law](electronics/coulombs_law.py)
* [Electric Conductivity](electronics/electric_conductivity.py)
* [Electric Power](electronics/electric_power.py)
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Real And Reactive Power](electronics/real_and_reactive_power.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
## File Transfer
* [Receive File](file_transfer/receive_file.py)
* [Send File](file_transfer/send_file.py)
* Tests
* [Test Send File](file_transfer/tests/test_send_file.py)
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
* [Present Value](financial/present_value.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
* [Julia Sets](fractals/julia_sets.py)
* [Koch Snowflake](fractals/koch_snowflake.py)
* [Mandelbrot](fractals/mandelbrot.py)
* [Sierpinski Triangle](fractals/sierpinski_triangle.py)
## Fuzzy Logic
* [Fuzzy Operations](fuzzy_logic/fuzzy_operations.py)
## Genetic Algorithm
* [Basic String](genetic_algorithm/basic_string.py)
## Geodesy
* [Haversine Distance](geodesy/haversine_distance.py)
* [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py)
## Graphics
* [Bezier Curve](graphics/bezier_curve.py)
* [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py)
## Graphs
* [A Star](graphs/a_star.py)
* [Articulation Points](graphs/articulation_points.py)
* [Basic Graphs](graphs/basic_graphs.py)
* [Bellman Ford](graphs/bellman_ford.py)
* [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py)
* [Bidirectional A Star](graphs/bidirectional_a_star.py)
* [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py)
* [Boruvka](graphs/boruvka.py)
* [Breadth First Search](graphs/breadth_first_search.py)
* [Breadth First Search 2](graphs/breadth_first_search_2.py)
* [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py)
* [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py)
* [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py)
* [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py)
* [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
* [Check Cycle](graphs/check_cycle.py)
* [Connected Components](graphs/connected_components.py)
* [Depth First Search](graphs/depth_first_search.py)
* [Depth First Search 2](graphs/depth_first_search_2.py)
* [Dijkstra](graphs/dijkstra.py)
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dijkstra Binary Grid](graphs/dijkstra_binary_grid.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
* [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
* [Even Tree](graphs/even_tree.py)
* [Finding Bridges](graphs/finding_bridges.py)
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
* [Graph Adjacency List](graphs/graph_adjacency_list.py)
* [Graph Adjacency Matrix](graphs/graph_adjacency_matrix.py)
* [Graph List](graphs/graph_list.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
* [Kahns Algorithm Long](graphs/kahns_algorithm_long.py)
* [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py)
* [Karger](graphs/karger.py)
* [Markov Chain](graphs/markov_chain.py)
* [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py)
* [Minimum Path Sum](graphs/minimum_path_sum.py)
* [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py)
* [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py)
* [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py)
* [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py)
* [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py)
* [Multi Heuristic Astar](graphs/multi_heuristic_astar.py)
* [Page Rank](graphs/page_rank.py)
* [Prim](graphs/prim.py)
* [Random Graph Generator](graphs/random_graph_generator.py)
* [Scc Kosaraju](graphs/scc_kosaraju.py)
* [Strongly Connected Components](graphs/strongly_connected_components.py)
* [Tarjans Scc](graphs/tarjans_scc.py)
* Tests
* [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py)
* [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py)
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
## Hashes
* [Adler32](hashes/adler32.py)
* [Chaos Machine](hashes/chaos_machine.py)
* [Djb2](hashes/djb2.py)
* [Elf](hashes/elf.py)
* [Enigma Machine](hashes/enigma_machine.py)
* [Hamming Code](hashes/hamming_code.py)
* [Luhn](hashes/luhn.py)
* [Md5](hashes/md5.py)
* [Sdbm](hashes/sdbm.py)
* [Sha1](hashes/sha1.py)
* [Sha256](hashes/sha256.py)
## Knapsack
* [Greedy Knapsack](knapsack/greedy_knapsack.py)
* [Knapsack](knapsack/knapsack.py)
* [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py)
* Tests
* [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py)
* [Test Knapsack](knapsack/tests/test_knapsack.py)
## Linear Algebra
* Src
* [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py)
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
* [Rank Of Matrix](linear_algebra/src/rank_of_matrix.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
## Linear Programming
* [Simplex](linear_programming/simplex.py)
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
* [Dimensionality Reduction](machine_learning/dimensionality_reduction.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
* [K Means Clust](machine_learning/k_means_clust.py)
* [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
* [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
* [Linear Regression](machine_learning/linear_regression.py)
* Local Weighted Learning
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
* Lstm
* [Lstm Prediction](machine_learning/lstm/lstm_prediction.py)
* [Mfcc](machine_learning/mfcc.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polynomial Regression](machine_learning/polynomial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
* [Similarity Search](machine_learning/similarity_search.py)
* [Support Vector Machines](machine_learning/support_vector_machines.py)
* [Word Frequency Functions](machine_learning/word_frequency_functions.py)
* [Xgboost Classifier](machine_learning/xgboost_classifier.py)
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
* [Abs](maths/abs.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
* [Aliquot Sum](maths/aliquot_sum.py)
* [Allocation Number](maths/allocation_number.py)
* [Arc Length](maths/arc_length.py)
* [Area](maths/area.py)
* [Area Under Curve](maths/area_under_curve.py)
* [Armstrong Numbers](maths/armstrong_numbers.py)
* [Automorphic Number](maths/automorphic_number.py)
* [Average Absolute Deviation](maths/average_absolute_deviation.py)
* [Average Mean](maths/average_mean.py)
* [Average Median](maths/average_median.py)
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
* [Basic Maths](maths/basic_maths.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
* [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
* [Binomial Coefficient](maths/binomial_coefficient.py)
* [Binomial Distribution](maths/binomial_distribution.py)
* [Bisection](maths/bisection.py)
* [Carmichael Number](maths/carmichael_number.py)
* [Catalan Number](maths/catalan_number.py)
* [Ceil](maths/ceil.py)
* [Check Polygon](maths/check_polygon.py)
* [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
* [Collatz Sequence](maths/collatz_sequence.py)
* [Combinations](maths/combinations.py)
* [Continued Fraction](maths/continued_fraction.py)
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
* [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euler Method](maths/euler_method.py)
* [Euler Modified](maths/euler_modified.py)
* [Eulers Totient](maths/eulers_totient.py)
* [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py)
* [Factorial](maths/factorial.py)
* [Factors](maths/factors.py)
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
* [Find Min](maths/find_min.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
* [Gaussian](maths/gaussian.py)
* [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py)
* [Gcd Of N Numbers](maths/gcd_of_n_numbers.py)
* [Greatest Common Divisor](maths/greatest_common_divisor.py)
* [Greedy Coin Change](maths/greedy_coin_change.py)
* [Hamming Numbers](maths/hamming_numbers.py)
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Harshad Numbers](maths/harshad_numbers.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Interquartile Range](maths/interquartile_range.py)
* [Is Int Palindrome](maths/is_int_palindrome.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
* [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py)
* [Lucas Series](maths/lucas_series.py)
* [Maclaurin Series](maths/maclaurin_series.py)
* [Manhattan Distance](maths/manhattan_distance.py)
* [Matrix Exponentiation](maths/matrix_exponentiation.py)
* [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
* [Median Of Two Arrays](maths/median_of_two_arrays.py)
* [Miller Rabin](maths/miller_rabin.py)
* [Mobius Function](maths/mobius_function.py)
* [Modular Exponential](maths/modular_exponential.py)
* [Monte Carlo](maths/monte_carlo.py)
* [Monte Carlo Dice](maths/monte_carlo_dice.py)
* [Nevilles Method](maths/nevilles_method.py)
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
* [Odd Sieve](maths/odd_sieve.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
* [Pi Generator](maths/pi_generator.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
* [Polygonal Numbers](maths/polygonal_numbers.py)
* [Polynomial Evaluation](maths/polynomial_evaluation.py)
* Polynomials
* [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
* [Power Using Recursion](maths/power_using_recursion.py)
* [Prime Check](maths/prime_check.py)
* [Prime Factors](maths/prime_factors.py)
* [Prime Numbers](maths/prime_numbers.py)
* [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py)
* [Primelib](maths/primelib.py)
* [Print Multiplication Table](maths/print_multiplication_table.py)
* [Pronic Number](maths/pronic_number.py)
* [Proth Number](maths/proth_number.py)
* [Pythagoras](maths/pythagoras.py)
* [Qr Decomposition](maths/qr_decomposition.py)
* [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
* [Remove Digit](maths/remove_digit.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
* [Arithmetic](maths/series/arithmetic.py)
* [Geometric](maths/series/geometric.py)
* [Geometric Series](maths/series/geometric_series.py)
* [Harmonic](maths/series/harmonic.py)
* [Harmonic Series](maths/series/harmonic_series.py)
* [Hexagonal Numbers](maths/series/hexagonal_numbers.py)
* [P Series](maths/series/p_series.py)
* [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
* [Sigmoid](maths/sigmoid.py)
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
* [Square Root](maths/square_root.py)
* [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py)
* [Sum Of Digits](maths/sum_of_digits.py)
* [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py)
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Tanh](maths/tanh.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Three Sum](maths/three_sum.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
* [Twin Prime](maths/twin_prime.py)
* [Two Pointer](maths/two_pointer.py)
* [Two Sum](maths/two_sum.py)
* [Ugly Numbers](maths/ugly_numbers.py)
* [Volume](maths/volume.py)
* [Weird Number](maths/weird_number.py)
* [Zellers Congruence](maths/zellers_congruence.py)
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
* [Count Negative Numbers In Sorted Matrix](matrix/count_negative_numbers_in_sorted_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
* [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py)
* [Matrix Class](matrix/matrix_class.py)
* [Matrix Operation](matrix/matrix_operation.py)
* [Max Area Of Island](matrix/max_area_of_island.py)
* [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py)
* [Pascal Triangle](matrix/pascal_triangle.py)
* [Rotate Matrix](matrix/rotate_matrix.py)
* [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py)
* [Sherman Morrison](matrix/sherman_morrison.py)
* [Spiral Print](matrix/spiral_print.py)
* Tests
* [Test Matrix Operation](matrix/tests/test_matrix_operation.py)
## Networking Flow
* [Ford Fulkerson](networking_flow/ford_fulkerson.py)
* [Minimum Cut](networking_flow/minimum_cut.py)
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* Activation Functions
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Leaky Rectified Linear Unit](neural_network/activation_functions/leaky_rectified_linear_unit.py)
* [Scaled Exponential Linear Unit](neural_network/activation_functions/scaled_exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
## Other
* [Activity Selection](other/activity_selection.py)
* [Alternative List Arrange](other/alternative_list_arrange.py)
* [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py)
* [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py)
* [Doomsday](other/doomsday.py)
* [Fischer Yates Shuffle](other/fischer_yates_shuffle.py)
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
* [Guess The Number Search](other/guess_the_number_search.py)
* [H Index](other/h_index.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
* [Number Container System](other/number_container_system.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
* [Sdes](other/sdes.py)
* [Tower Of Hanoi](other/tower_of_hanoi.py)
* [Word Search](other/word_search.py)
## Physics
* [Altitude Pressure](physics/altitude_pressure.py)
* [Archimedes Principle](physics/archimedes_principle.py)
* [Basic Orbital Capture](physics/basic_orbital_capture.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Coulombs Law](physics/coulombs_law.py)
* [Grahams Law](physics/grahams_law.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
* [Ideal Gas Law](physics/ideal_gas_law.py)
* [Kinetic Energy](physics/kinetic_energy.py)
* [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py)
* [Malus Law](physics/malus_law.py)
* [N Body Simulation](physics/n_body_simulation.py)
* [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py)
* [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py)
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
* [Speed Of Sound](physics/speed_of_sound.py)
## Project Euler
* Problem 001
* [Sol1](project_euler/problem_001/sol1.py)
* [Sol2](project_euler/problem_001/sol2.py)
* [Sol3](project_euler/problem_001/sol3.py)
* [Sol4](project_euler/problem_001/sol4.py)
* [Sol5](project_euler/problem_001/sol5.py)
* [Sol6](project_euler/problem_001/sol6.py)
* [Sol7](project_euler/problem_001/sol7.py)
* Problem 002
* [Sol1](project_euler/problem_002/sol1.py)
* [Sol2](project_euler/problem_002/sol2.py)
* [Sol3](project_euler/problem_002/sol3.py)
* [Sol4](project_euler/problem_002/sol4.py)
* [Sol5](project_euler/problem_002/sol5.py)
* Problem 003
* [Sol1](project_euler/problem_003/sol1.py)
* [Sol2](project_euler/problem_003/sol2.py)
* [Sol3](project_euler/problem_003/sol3.py)
* Problem 004
* [Sol1](project_euler/problem_004/sol1.py)
* [Sol2](project_euler/problem_004/sol2.py)
* Problem 005
* [Sol1](project_euler/problem_005/sol1.py)
* [Sol2](project_euler/problem_005/sol2.py)
* Problem 006
* [Sol1](project_euler/problem_006/sol1.py)
* [Sol2](project_euler/problem_006/sol2.py)
* [Sol3](project_euler/problem_006/sol3.py)
* [Sol4](project_euler/problem_006/sol4.py)
* Problem 007
* [Sol1](project_euler/problem_007/sol1.py)
* [Sol2](project_euler/problem_007/sol2.py)
* [Sol3](project_euler/problem_007/sol3.py)
* Problem 008
* [Sol1](project_euler/problem_008/sol1.py)
* [Sol2](project_euler/problem_008/sol2.py)
* [Sol3](project_euler/problem_008/sol3.py)
* Problem 009
* [Sol1](project_euler/problem_009/sol1.py)
* [Sol2](project_euler/problem_009/sol2.py)
* [Sol3](project_euler/problem_009/sol3.py)
* Problem 010
* [Sol1](project_euler/problem_010/sol1.py)
* [Sol2](project_euler/problem_010/sol2.py)
* [Sol3](project_euler/problem_010/sol3.py)
* Problem 011
* [Sol1](project_euler/problem_011/sol1.py)
* [Sol2](project_euler/problem_011/sol2.py)
* Problem 012
* [Sol1](project_euler/problem_012/sol1.py)
* [Sol2](project_euler/problem_012/sol2.py)
* Problem 013
* [Sol1](project_euler/problem_013/sol1.py)
* Problem 014
* [Sol1](project_euler/problem_014/sol1.py)
* [Sol2](project_euler/problem_014/sol2.py)
* Problem 015
* [Sol1](project_euler/problem_015/sol1.py)
* Problem 016
* [Sol1](project_euler/problem_016/sol1.py)
* [Sol2](project_euler/problem_016/sol2.py)
* Problem 017
* [Sol1](project_euler/problem_017/sol1.py)
* Problem 018
* [Solution](project_euler/problem_018/solution.py)
* Problem 019
* [Sol1](project_euler/problem_019/sol1.py)
* Problem 020
* [Sol1](project_euler/problem_020/sol1.py)
* [Sol2](project_euler/problem_020/sol2.py)
* [Sol3](project_euler/problem_020/sol3.py)
* [Sol4](project_euler/problem_020/sol4.py)
* Problem 021
* [Sol1](project_euler/problem_021/sol1.py)
* Problem 022
* [Sol1](project_euler/problem_022/sol1.py)
* [Sol2](project_euler/problem_022/sol2.py)
* Problem 023
* [Sol1](project_euler/problem_023/sol1.py)
* Problem 024
* [Sol1](project_euler/problem_024/sol1.py)
* Problem 025
* [Sol1](project_euler/problem_025/sol1.py)
* [Sol2](project_euler/problem_025/sol2.py)
* [Sol3](project_euler/problem_025/sol3.py)
* Problem 026
* [Sol1](project_euler/problem_026/sol1.py)
* Problem 027
* [Sol1](project_euler/problem_027/sol1.py)
* Problem 028
* [Sol1](project_euler/problem_028/sol1.py)
* Problem 029
* [Sol1](project_euler/problem_029/sol1.py)
* Problem 030
* [Sol1](project_euler/problem_030/sol1.py)
* Problem 031
* [Sol1](project_euler/problem_031/sol1.py)
* [Sol2](project_euler/problem_031/sol2.py)
* Problem 032
* [Sol32](project_euler/problem_032/sol32.py)
* Problem 033
* [Sol1](project_euler/problem_033/sol1.py)
* Problem 034
* [Sol1](project_euler/problem_034/sol1.py)
* Problem 035
* [Sol1](project_euler/problem_035/sol1.py)
* Problem 036
* [Sol1](project_euler/problem_036/sol1.py)
* Problem 037
* [Sol1](project_euler/problem_037/sol1.py)
* Problem 038
* [Sol1](project_euler/problem_038/sol1.py)
* Problem 039
* [Sol1](project_euler/problem_039/sol1.py)
* Problem 040
* [Sol1](project_euler/problem_040/sol1.py)
* Problem 041
* [Sol1](project_euler/problem_041/sol1.py)
* Problem 042
* [Solution42](project_euler/problem_042/solution42.py)
* Problem 043
* [Sol1](project_euler/problem_043/sol1.py)
* Problem 044
* [Sol1](project_euler/problem_044/sol1.py)
* Problem 045
* [Sol1](project_euler/problem_045/sol1.py)
* Problem 046
* [Sol1](project_euler/problem_046/sol1.py)
* Problem 047
* [Sol1](project_euler/problem_047/sol1.py)
* Problem 048
* [Sol1](project_euler/problem_048/sol1.py)
* Problem 049
* [Sol1](project_euler/problem_049/sol1.py)
* Problem 050
* [Sol1](project_euler/problem_050/sol1.py)
* Problem 051
* [Sol1](project_euler/problem_051/sol1.py)
* Problem 052
* [Sol1](project_euler/problem_052/sol1.py)
* Problem 053
* [Sol1](project_euler/problem_053/sol1.py)
* Problem 054
* [Sol1](project_euler/problem_054/sol1.py)
* [Test Poker Hand](project_euler/problem_054/test_poker_hand.py)
* Problem 055
* [Sol1](project_euler/problem_055/sol1.py)
* Problem 056
* [Sol1](project_euler/problem_056/sol1.py)
* Problem 057
* [Sol1](project_euler/problem_057/sol1.py)
* Problem 058
* [Sol1](project_euler/problem_058/sol1.py)
* Problem 059
* [Sol1](project_euler/problem_059/sol1.py)
* Problem 062
* [Sol1](project_euler/problem_062/sol1.py)
* Problem 063
* [Sol1](project_euler/problem_063/sol1.py)
* Problem 064
* [Sol1](project_euler/problem_064/sol1.py)
* Problem 065
* [Sol1](project_euler/problem_065/sol1.py)
* Problem 067
* [Sol1](project_euler/problem_067/sol1.py)
* [Sol2](project_euler/problem_067/sol2.py)
* Problem 068
* [Sol1](project_euler/problem_068/sol1.py)
* Problem 069
* [Sol1](project_euler/problem_069/sol1.py)
* Problem 070
* [Sol1](project_euler/problem_070/sol1.py)
* Problem 071
* [Sol1](project_euler/problem_071/sol1.py)
* Problem 072
* [Sol1](project_euler/problem_072/sol1.py)
* [Sol2](project_euler/problem_072/sol2.py)
* Problem 073
* [Sol1](project_euler/problem_073/sol1.py)
* Problem 074
* [Sol1](project_euler/problem_074/sol1.py)
* [Sol2](project_euler/problem_074/sol2.py)
* Problem 075
* [Sol1](project_euler/problem_075/sol1.py)
* Problem 076
* [Sol1](project_euler/problem_076/sol1.py)
* Problem 077
* [Sol1](project_euler/problem_077/sol1.py)
* Problem 078
* [Sol1](project_euler/problem_078/sol1.py)
* Problem 079
* [Sol1](project_euler/problem_079/sol1.py)
* Problem 080
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
* [Sol1](project_euler/problem_081/sol1.py)
* Problem 082
* [Sol1](project_euler/problem_082/sol1.py)
* Problem 085
* [Sol1](project_euler/problem_085/sol1.py)
* Problem 086
* [Sol1](project_euler/problem_086/sol1.py)
* Problem 087
* [Sol1](project_euler/problem_087/sol1.py)
* Problem 089
* [Sol1](project_euler/problem_089/sol1.py)
* Problem 091
* [Sol1](project_euler/problem_091/sol1.py)
* Problem 092
* [Sol1](project_euler/problem_092/sol1.py)
* Problem 094
* [Sol1](project_euler/problem_094/sol1.py)
* Problem 097
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
* [Sol1](project_euler/problem_099/sol1.py)
* Problem 100
* [Sol1](project_euler/problem_100/sol1.py)
* Problem 101
* [Sol1](project_euler/problem_101/sol1.py)
* Problem 102
* [Sol1](project_euler/problem_102/sol1.py)
* Problem 104
* [Sol1](project_euler/problem_104/sol1.py)
* Problem 107
* [Sol1](project_euler/problem_107/sol1.py)
* Problem 109
* [Sol1](project_euler/problem_109/sol1.py)
* Problem 112
* [Sol1](project_euler/problem_112/sol1.py)
* Problem 113
* [Sol1](project_euler/problem_113/sol1.py)
* Problem 114
* [Sol1](project_euler/problem_114/sol1.py)
* Problem 115
* [Sol1](project_euler/problem_115/sol1.py)
* Problem 116
* [Sol1](project_euler/problem_116/sol1.py)
* Problem 117
* [Sol1](project_euler/problem_117/sol1.py)
* Problem 119
* [Sol1](project_euler/problem_119/sol1.py)
* Problem 120
* [Sol1](project_euler/problem_120/sol1.py)
* Problem 121
* [Sol1](project_euler/problem_121/sol1.py)
* Problem 123
* [Sol1](project_euler/problem_123/sol1.py)
* Problem 125
* [Sol1](project_euler/problem_125/sol1.py)
* Problem 129
* [Sol1](project_euler/problem_129/sol1.py)
* Problem 131
* [Sol1](project_euler/problem_131/sol1.py)
* Problem 135
* [Sol1](project_euler/problem_135/sol1.py)
* Problem 144
* [Sol1](project_euler/problem_144/sol1.py)
* Problem 145
* [Sol1](project_euler/problem_145/sol1.py)
* Problem 173
* [Sol1](project_euler/problem_173/sol1.py)
* Problem 174
* [Sol1](project_euler/problem_174/sol1.py)
* Problem 180
* [Sol1](project_euler/problem_180/sol1.py)
* Problem 187
* [Sol1](project_euler/problem_187/sol1.py)
* Problem 188
* [Sol1](project_euler/problem_188/sol1.py)
* Problem 191
* [Sol1](project_euler/problem_191/sol1.py)
* Problem 203
* [Sol1](project_euler/problem_203/sol1.py)
* Problem 205
* [Sol1](project_euler/problem_205/sol1.py)
* Problem 206
* [Sol1](project_euler/problem_206/sol1.py)
* Problem 207
* [Sol1](project_euler/problem_207/sol1.py)
* Problem 234
* [Sol1](project_euler/problem_234/sol1.py)
* Problem 301
* [Sol1](project_euler/problem_301/sol1.py)
* Problem 493
* [Sol1](project_euler/problem_493/sol1.py)
* Problem 551
* [Sol1](project_euler/problem_551/sol1.py)
* Problem 587
* [Sol1](project_euler/problem_587/sol1.py)
* Problem 686
* [Sol1](project_euler/problem_686/sol1.py)
* Problem 800
* [Sol1](project_euler/problem_800/sol1.py)
## Quantum
* [Bb84](quantum/bb84.py)
* [Deutsch Jozsa](quantum/deutsch_jozsa.py)
* [Half Adder](quantum/half_adder.py)
* [Not Gate](quantum/not_gate.py)
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
* [Superdense Coding](quantum/superdense_coding.py)
## Scheduling
* [First Come First Served](scheduling/first_come_first_served.py)
* [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py)
* [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py)
* [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py)
* [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py)
* [Round Robin](scheduling/round_robin.py)
* [Shortest Job First](scheduling/shortest_job_first.py)
## Searches
* [Binary Search](searches/binary_search.py)
* [Binary Tree Traversal](searches/binary_tree_traversal.py)
* [Double Linear Search](searches/double_linear_search.py)
* [Double Linear Search Recursion](searches/double_linear_search_recursion.py)
* [Fibonacci Search](searches/fibonacci_search.py)
* [Hill Climbing](searches/hill_climbing.py)
* [Interpolation Search](searches/interpolation_search.py)
* [Jump Search](searches/jump_search.py)
* [Linear Search](searches/linear_search.py)
* [Quick Select](searches/quick_select.py)
* [Sentinel Linear Search](searches/sentinel_linear_search.py)
* [Simple Binary Search](searches/simple_binary_search.py)
* [Simulated Annealing](searches/simulated_annealing.py)
* [Tabu Search](searches/tabu_search.py)
* [Ternary Search](searches/ternary_search.py)
## Sorts
* [Bead Sort](sorts/bead_sort.py)
* [Binary Insertion Sort](sorts/binary_insertion_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
* [Bucket Sort](sorts/bucket_sort.py)
* [Circle Sort](sorts/circle_sort.py)
* [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py)
* [Comb Sort](sorts/comb_sort.py)
* [Counting Sort](sorts/counting_sort.py)
* [Cycle Sort](sorts/cycle_sort.py)
* [Double Sort](sorts/double_sort.py)
* [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py)
* [Exchange Sort](sorts/exchange_sort.py)
* [External Sort](sorts/external_sort.py)
* [Gnome Sort](sorts/gnome_sort.py)
* [Heap Sort](sorts/heap_sort.py)
* [Insertion Sort](sorts/insertion_sort.py)
* [Intro Sort](sorts/intro_sort.py)
* [Iterative Merge Sort](sorts/iterative_merge_sort.py)
* [Merge Insertion Sort](sorts/merge_insertion_sort.py)
* [Merge Sort](sorts/merge_sort.py)
* [Msd Radix Sort](sorts/msd_radix_sort.py)
* [Natural Sort](sorts/natural_sort.py)
* [Odd Even Sort](sorts/odd_even_sort.py)
* [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py)
* [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py)
* [Pancake Sort](sorts/pancake_sort.py)
* [Patience Sort](sorts/patience_sort.py)
* [Pigeon Sort](sorts/pigeon_sort.py)
* [Pigeonhole Sort](sorts/pigeonhole_sort.py)
* [Quick Sort](sorts/quick_sort.py)
* [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py)
* [Radix Sort](sorts/radix_sort.py)
* [Recursive Bubble Sort](sorts/recursive_bubble_sort.py)
* [Recursive Insertion Sort](sorts/recursive_insertion_sort.py)
* [Recursive Mergesort Array](sorts/recursive_mergesort_array.py)
* [Recursive Quick Sort](sorts/recursive_quick_sort.py)
* [Selection Sort](sorts/selection_sort.py)
* [Shell Sort](sorts/shell_sort.py)
* [Shrink Shell Sort](sorts/shrink_shell_sort.py)
* [Slowsort](sorts/slowsort.py)
* [Stooge Sort](sorts/stooge_sort.py)
* [Strand Sort](sorts/strand_sort.py)
* [Tim Sort](sorts/tim_sort.py)
* [Topological Sort](sorts/topological_sort.py)
* [Tree Sort](sorts/tree_sort.py)
* [Unknown Sort](sorts/unknown_sort.py)
* [Wiggle Sort](sorts/wiggle_sort.py)
## Strings
* [Aho Corasick](strings/aho_corasick.py)
* [Alternative String Arrange](strings/alternative_string_arrange.py)
* [Anagrams](strings/anagrams.py)
* [Autocomplete Using Trie](strings/autocomplete_using_trie.py)
* [Barcode Validator](strings/barcode_validator.py)
* [Boyer Moore Search](strings/boyer_moore_search.py)
* [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py)
* [Capitalize](strings/capitalize.py)
* [Check Anagrams](strings/check_anagrams.py)
* [Credit Card Validator](strings/credit_card_validator.py)
* [Detecting English Programmatically](strings/detecting_english_programmatically.py)
* [Dna](strings/dna.py)
* [Frequency Finder](strings/frequency_finder.py)
* [Hamming Distance](strings/hamming_distance.py)
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
* [Is Valid Email Address](strings/is_valid_email_address.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
* [Levenshtein Distance](strings/levenshtein_distance.py)
* [Lower](strings/lower.py)
* [Manacher](strings/manacher.py)
* [Min Cost String Conversion](strings/min_cost_string_conversion.py)
* [Naive String Search](strings/naive_string_search.py)
* [Ngram](strings/ngram.py)
* [Palindrome](strings/palindrome.py)
* [Prefix Function](strings/prefix_function.py)
* [Rabin Karp](strings/rabin_karp.py)
* [Remove Duplicate](strings/remove_duplicate.py)
* [Reverse Letters](strings/reverse_letters.py)
* [Reverse Long Words](strings/reverse_long_words.py)
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [String Switch Case](strings/string_switch_case.py)
* [Text Justification](strings/text_justification.py)
* [Top K Frequent Words](strings/top_k_frequent_words.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
* [Word Occurrence](strings/word_occurrence.py)
* [Word Patterns](strings/word_patterns.py)
* [Z Function](strings/z_function.py)
## Web Programming
* [Co2 Emission](web_programming/co2_emission.py)
* [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py)
* [Crawl Google Results](web_programming/crawl_google_results.py)
* [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py)
* [Currency Converter](web_programming/currency_converter.py)
* [Current Stock Price](web_programming/current_stock_price.py)
* [Current Weather](web_programming/current_weather.py)
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
* [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
* [Fetch Quotes](web_programming/fetch_quotes.py)
* [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py)
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
* [Get Top Billionaires](web_programming/get_top_billionaires.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
* [Instagram Crawler](web_programming/instagram_crawler.py)
* [Instagram Pic](web_programming/instagram_pic.py)
* [Instagram Video](web_programming/instagram_video.py)
* [Nasa Data](web_programming/nasa_data.py)
* [Open Google Results](web_programming/open_google_results.py)
* [Random Anime Character](web_programming/random_anime_character.py)
* [Recaptcha Verification](web_programming/recaptcha_verification.py)
* [Reddit](web_programming/reddit.py)
* [Search Books By Isbn](web_programming/search_books_by_isbn.py)
* [Slack Message](web_programming/slack_message.py)
* [Test Fetch Github Info](web_programming/test_fetch_github_info.py)
* [World Covid19 Stats](web_programming/world_covid19_stats.py)
## Arithmetic Analysis
* [Bisection](arithmetic_analysis/bisection.py)
* [Gaussian Elimination](arithmetic_analysis/gaussian_elimination.py)
* [In Static Equilibrium](arithmetic_analysis/in_static_equilibrium.py)
* [Intersection](arithmetic_analysis/intersection.py)
* [Jacobi Iteration Method](arithmetic_analysis/jacobi_iteration_method.py)
* [Lu Decomposition](arithmetic_analysis/lu_decomposition.py)
* [Newton Forward Interpolation](arithmetic_analysis/newton_forward_interpolation.py)
* [Newton Method](arithmetic_analysis/newton_method.py)
* [Newton Raphson](arithmetic_analysis/newton_raphson.py)
* [Newton Raphson New](arithmetic_analysis/newton_raphson_new.py)
* [Secant Method](arithmetic_analysis/secant_method.py)
## Audio Filters
* [Butterworth Filter](audio_filters/butterworth_filter.py)
* [Iir Filter](audio_filters/iir_filter.py)
* [Show Response](audio_filters/show_response.py)
## Backtracking
* [All Combinations](backtracking/all_combinations.py)
* [All Permutations](backtracking/all_permutations.py)
* [All Subsequences](backtracking/all_subsequences.py)
* [Coloring](backtracking/coloring.py)
* [Combination Sum](backtracking/combination_sum.py)
* [Hamiltonian Cycle](backtracking/hamiltonian_cycle.py)
* [Knight Tour](backtracking/knight_tour.py)
* [Minimax](backtracking/minimax.py)
* [N Queens](backtracking/n_queens.py)
* [N Queens Math](backtracking/n_queens_math.py)
* [Power Sum](backtracking/power_sum.py)
* [Rat In Maze](backtracking/rat_in_maze.py)
* [Sudoku](backtracking/sudoku.py)
* [Sum Of Subsets](backtracking/sum_of_subsets.py)
* [Word Search](backtracking/word_search.py)
## Bit Manipulation
* [Binary And Operator](bit_manipulation/binary_and_operator.py)
* [Binary Count Setbits](bit_manipulation/binary_count_setbits.py)
* [Binary Count Trailing Zeros](bit_manipulation/binary_count_trailing_zeros.py)
* [Binary Or Operator](bit_manipulation/binary_or_operator.py)
* [Binary Shifts](bit_manipulation/binary_shifts.py)
* [Binary Twos Complement](bit_manipulation/binary_twos_complement.py)
* [Binary Xor Operator](bit_manipulation/binary_xor_operator.py)
* [Bitwise Addition Recursive](bit_manipulation/bitwise_addition_recursive.py)
* [Count 1S Brian Kernighan Method](bit_manipulation/count_1s_brian_kernighan_method.py)
* [Count Number Of One Bits](bit_manipulation/count_number_of_one_bits.py)
* [Gray Code Sequence](bit_manipulation/gray_code_sequence.py)
* [Highest Set Bit](bit_manipulation/highest_set_bit.py)
* [Index Of Rightmost Set Bit](bit_manipulation/index_of_rightmost_set_bit.py)
* [Is Even](bit_manipulation/is_even.py)
* [Is Power Of Two](bit_manipulation/is_power_of_two.py)
* [Missing Number](bit_manipulation/missing_number.py)
* [Numbers Different Signs](bit_manipulation/numbers_different_signs.py)
* [Reverse Bits](bit_manipulation/reverse_bits.py)
* [Single Bit Manipulation Operations](bit_manipulation/single_bit_manipulation_operations.py)
## Blockchain
* [Chinese Remainder Theorem](blockchain/chinese_remainder_theorem.py)
* [Diophantine Equation](blockchain/diophantine_equation.py)
* [Modular Division](blockchain/modular_division.py)
## Boolean Algebra
* [And Gate](boolean_algebra/and_gate.py)
* [Nand Gate](boolean_algebra/nand_gate.py)
* [Nor Gate](boolean_algebra/nor_gate.py)
* [Not Gate](boolean_algebra/not_gate.py)
* [Or Gate](boolean_algebra/or_gate.py)
* [Quine Mc Cluskey](boolean_algebra/quine_mc_cluskey.py)
* [Xnor Gate](boolean_algebra/xnor_gate.py)
* [Xor Gate](boolean_algebra/xor_gate.py)
## Cellular Automata
* [Conways Game Of Life](cellular_automata/conways_game_of_life.py)
* [Game Of Life](cellular_automata/game_of_life.py)
* [Langtons Ant](cellular_automata/langtons_ant.py)
* [Nagel Schrekenberg](cellular_automata/nagel_schrekenberg.py)
* [One Dimensional](cellular_automata/one_dimensional.py)
* [Wa Tor](cellular_automata/wa_tor.py)
## Ciphers
* [A1Z26](ciphers/a1z26.py)
* [Affine Cipher](ciphers/affine_cipher.py)
* [Atbash](ciphers/atbash.py)
* [Autokey](ciphers/autokey.py)
* [Baconian Cipher](ciphers/baconian_cipher.py)
* [Base16](ciphers/base16.py)
* [Base32](ciphers/base32.py)
* [Base64](ciphers/base64.py)
* [Base85](ciphers/base85.py)
* [Beaufort Cipher](ciphers/beaufort_cipher.py)
* [Bifid](ciphers/bifid.py)
* [Brute Force Caesar Cipher](ciphers/brute_force_caesar_cipher.py)
* [Caesar Cipher](ciphers/caesar_cipher.py)
* [Cryptomath Module](ciphers/cryptomath_module.py)
* [Decrypt Caesar With Chi Squared](ciphers/decrypt_caesar_with_chi_squared.py)
* [Deterministic Miller Rabin](ciphers/deterministic_miller_rabin.py)
* [Diffie](ciphers/diffie.py)
* [Diffie Hellman](ciphers/diffie_hellman.py)
* [Elgamal Key Generator](ciphers/elgamal_key_generator.py)
* [Enigma Machine2](ciphers/enigma_machine2.py)
* [Hill Cipher](ciphers/hill_cipher.py)
* [Mixed Keyword Cypher](ciphers/mixed_keyword_cypher.py)
* [Mono Alphabetic Ciphers](ciphers/mono_alphabetic_ciphers.py)
* [Morse Code](ciphers/morse_code.py)
* [Onepad Cipher](ciphers/onepad_cipher.py)
* [Playfair Cipher](ciphers/playfair_cipher.py)
* [Polybius](ciphers/polybius.py)
* [Porta Cipher](ciphers/porta_cipher.py)
* [Rabin Miller](ciphers/rabin_miller.py)
* [Rail Fence Cipher](ciphers/rail_fence_cipher.py)
* [Rot13](ciphers/rot13.py)
* [Rsa Cipher](ciphers/rsa_cipher.py)
* [Rsa Factorization](ciphers/rsa_factorization.py)
* [Rsa Key Generator](ciphers/rsa_key_generator.py)
* [Shuffled Shift Cipher](ciphers/shuffled_shift_cipher.py)
* [Simple Keyword Cypher](ciphers/simple_keyword_cypher.py)
* [Simple Substitution Cipher](ciphers/simple_substitution_cipher.py)
* [Trafid Cipher](ciphers/trafid_cipher.py)
* [Transposition Cipher](ciphers/transposition_cipher.py)
* [Transposition Cipher Encrypt Decrypt File](ciphers/transposition_cipher_encrypt_decrypt_file.py)
* [Vigenere Cipher](ciphers/vigenere_cipher.py)
* [Xor Cipher](ciphers/xor_cipher.py)
## Compression
* [Burrows Wheeler](compression/burrows_wheeler.py)
* [Huffman](compression/huffman.py)
* [Lempel Ziv](compression/lempel_ziv.py)
* [Lempel Ziv Decompress](compression/lempel_ziv_decompress.py)
* [Lz77](compression/lz77.py)
* [Peak Signal To Noise Ratio](compression/peak_signal_to_noise_ratio.py)
* [Run Length Encoding](compression/run_length_encoding.py)
## Computer Vision
* [Flip Augmentation](computer_vision/flip_augmentation.py)
* [Haralick Descriptors](computer_vision/haralick_descriptors.py)
* [Harris Corner](computer_vision/harris_corner.py)
* [Horn Schunck](computer_vision/horn_schunck.py)
* [Mean Threshold](computer_vision/mean_threshold.py)
* [Mosaic Augmentation](computer_vision/mosaic_augmentation.py)
* [Pooling Functions](computer_vision/pooling_functions.py)
## Conversions
* [Astronomical Length Scale Conversion](conversions/astronomical_length_scale_conversion.py)
* [Binary To Decimal](conversions/binary_to_decimal.py)
* [Binary To Hexadecimal](conversions/binary_to_hexadecimal.py)
* [Binary To Octal](conversions/binary_to_octal.py)
* [Convert Number To Words](conversions/convert_number_to_words.py)
* [Decimal To Any](conversions/decimal_to_any.py)
* [Decimal To Binary](conversions/decimal_to_binary.py)
* [Decimal To Hexadecimal](conversions/decimal_to_hexadecimal.py)
* [Decimal To Octal](conversions/decimal_to_octal.py)
* [Energy Conversions](conversions/energy_conversions.py)
* [Excel Title To Column](conversions/excel_title_to_column.py)
* [Hex To Bin](conversions/hex_to_bin.py)
* [Hexadecimal To Decimal](conversions/hexadecimal_to_decimal.py)
* [Length Conversion](conversions/length_conversion.py)
* [Molecular Chemistry](conversions/molecular_chemistry.py)
* [Octal To Binary](conversions/octal_to_binary.py)
* [Octal To Decimal](conversions/octal_to_decimal.py)
* [Prefix Conversions](conversions/prefix_conversions.py)
* [Prefix Conversions String](conversions/prefix_conversions_string.py)
* [Pressure Conversions](conversions/pressure_conversions.py)
* [Rgb Hsv Conversion](conversions/rgb_hsv_conversion.py)
* [Roman Numerals](conversions/roman_numerals.py)
* [Speed Conversions](conversions/speed_conversions.py)
* [Temperature Conversions](conversions/temperature_conversions.py)
* [Volume Conversions](conversions/volume_conversions.py)
* [Weight Conversion](conversions/weight_conversion.py)
## Data Structures
* Arrays
* [Permutations](data_structures/arrays/permutations.py)
* [Prefix Sum](data_structures/arrays/prefix_sum.py)
* [Product Sum](data_structures/arrays/product_sum.py)
* Binary Tree
* [Avl Tree](data_structures/binary_tree/avl_tree.py)
* [Basic Binary Tree](data_structures/binary_tree/basic_binary_tree.py)
* [Binary Search Tree](data_structures/binary_tree/binary_search_tree.py)
* [Binary Search Tree Recursive](data_structures/binary_tree/binary_search_tree_recursive.py)
* [Binary Tree Mirror](data_structures/binary_tree/binary_tree_mirror.py)
* [Binary Tree Node Sum](data_structures/binary_tree/binary_tree_node_sum.py)
* [Binary Tree Path Sum](data_structures/binary_tree/binary_tree_path_sum.py)
* [Binary Tree Traversals](data_structures/binary_tree/binary_tree_traversals.py)
* [Diff Views Of Binary Tree](data_structures/binary_tree/diff_views_of_binary_tree.py)
* [Distribute Coins](data_structures/binary_tree/distribute_coins.py)
* [Fenwick Tree](data_structures/binary_tree/fenwick_tree.py)
* [Inorder Tree Traversal 2022](data_structures/binary_tree/inorder_tree_traversal_2022.py)
* [Is Bst](data_structures/binary_tree/is_bst.py)
* [Lazy Segment Tree](data_structures/binary_tree/lazy_segment_tree.py)
* [Lowest Common Ancestor](data_structures/binary_tree/lowest_common_ancestor.py)
* [Maximum Fenwick Tree](data_structures/binary_tree/maximum_fenwick_tree.py)
* [Merge Two Binary Trees](data_structures/binary_tree/merge_two_binary_trees.py)
* [Non Recursive Segment Tree](data_structures/binary_tree/non_recursive_segment_tree.py)
* [Number Of Possible Binary Trees](data_structures/binary_tree/number_of_possible_binary_trees.py)
* [Red Black Tree](data_structures/binary_tree/red_black_tree.py)
* [Segment Tree](data_structures/binary_tree/segment_tree.py)
* [Segment Tree Other](data_structures/binary_tree/segment_tree_other.py)
* [Treap](data_structures/binary_tree/treap.py)
* [Wavelet Tree](data_structures/binary_tree/wavelet_tree.py)
* Disjoint Set
* [Alternate Disjoint Set](data_structures/disjoint_set/alternate_disjoint_set.py)
* [Disjoint Set](data_structures/disjoint_set/disjoint_set.py)
* Hashing
* [Bloom Filter](data_structures/hashing/bloom_filter.py)
* [Double Hash](data_structures/hashing/double_hash.py)
* [Hash Map](data_structures/hashing/hash_map.py)
* [Hash Table](data_structures/hashing/hash_table.py)
* [Hash Table With Linked List](data_structures/hashing/hash_table_with_linked_list.py)
* Number Theory
* [Prime Numbers](data_structures/hashing/number_theory/prime_numbers.py)
* [Quadratic Probing](data_structures/hashing/quadratic_probing.py)
* Tests
* [Test Hash Map](data_structures/hashing/tests/test_hash_map.py)
* Heap
* [Binomial Heap](data_structures/heap/binomial_heap.py)
* [Heap](data_structures/heap/heap.py)
* [Heap Generic](data_structures/heap/heap_generic.py)
* [Max Heap](data_structures/heap/max_heap.py)
* [Min Heap](data_structures/heap/min_heap.py)
* [Randomized Heap](data_structures/heap/randomized_heap.py)
* [Skew Heap](data_structures/heap/skew_heap.py)
* Linked List
* [Circular Linked List](data_structures/linked_list/circular_linked_list.py)
* [Deque Doubly](data_structures/linked_list/deque_doubly.py)
* [Doubly Linked List](data_structures/linked_list/doubly_linked_list.py)
* [Doubly Linked List Two](data_structures/linked_list/doubly_linked_list_two.py)
* [From Sequence](data_structures/linked_list/from_sequence.py)
* [Has Loop](data_structures/linked_list/has_loop.py)
* [Is Palindrome](data_structures/linked_list/is_palindrome.py)
* [Merge Two Lists](data_structures/linked_list/merge_two_lists.py)
* [Middle Element Of Linked List](data_structures/linked_list/middle_element_of_linked_list.py)
* [Print Reverse](data_structures/linked_list/print_reverse.py)
* [Reverse K Group](data_structures/linked_list/reverse_k_group.py)
* [Rotate To The Right](data_structures/linked_list/rotate_to_the_right.py)
* [Singly Linked List](data_structures/linked_list/singly_linked_list.py)
* [Skip List](data_structures/linked_list/skip_list.py)
* [Swap Nodes](data_structures/linked_list/swap_nodes.py)
* Queue
* [Circular Queue](data_structures/queue/circular_queue.py)
* [Circular Queue Linked List](data_structures/queue/circular_queue_linked_list.py)
* [Double Ended Queue](data_structures/queue/double_ended_queue.py)
* [Linked Queue](data_structures/queue/linked_queue.py)
* [Priority Queue Using List](data_structures/queue/priority_queue_using_list.py)
* [Queue By List](data_structures/queue/queue_by_list.py)
* [Queue By Two Stacks](data_structures/queue/queue_by_two_stacks.py)
* [Queue On Pseudo Stack](data_structures/queue/queue_on_pseudo_stack.py)
* Stacks
* [Balanced Parentheses](data_structures/stacks/balanced_parentheses.py)
* [Dijkstras Two Stack Algorithm](data_structures/stacks/dijkstras_two_stack_algorithm.py)
* [Infix To Postfix Conversion](data_structures/stacks/infix_to_postfix_conversion.py)
* [Infix To Prefix Conversion](data_structures/stacks/infix_to_prefix_conversion.py)
* [Next Greater Element](data_structures/stacks/next_greater_element.py)
* [Postfix Evaluation](data_structures/stacks/postfix_evaluation.py)
* [Prefix Evaluation](data_structures/stacks/prefix_evaluation.py)
* [Stack](data_structures/stacks/stack.py)
* [Stack With Doubly Linked List](data_structures/stacks/stack_with_doubly_linked_list.py)
* [Stack With Singly Linked List](data_structures/stacks/stack_with_singly_linked_list.py)
* [Stock Span Problem](data_structures/stacks/stock_span_problem.py)
* Trie
* [Radix Tree](data_structures/trie/radix_tree.py)
* [Trie](data_structures/trie/trie.py)
## Digital Image Processing
* [Change Brightness](digital_image_processing/change_brightness.py)
* [Change Contrast](digital_image_processing/change_contrast.py)
* [Convert To Negative](digital_image_processing/convert_to_negative.py)
* Dithering
* [Burkes](digital_image_processing/dithering/burkes.py)
* Edge Detection
* [Canny](digital_image_processing/edge_detection/canny.py)
* Filters
* [Bilateral Filter](digital_image_processing/filters/bilateral_filter.py)
* [Convolve](digital_image_processing/filters/convolve.py)
* [Gabor Filter](digital_image_processing/filters/gabor_filter.py)
* [Gaussian Filter](digital_image_processing/filters/gaussian_filter.py)
* [Local Binary Pattern](digital_image_processing/filters/local_binary_pattern.py)
* [Median Filter](digital_image_processing/filters/median_filter.py)
* [Sobel Filter](digital_image_processing/filters/sobel_filter.py)
* Histogram Equalization
* [Histogram Stretch](digital_image_processing/histogram_equalization/histogram_stretch.py)
* [Index Calculation](digital_image_processing/index_calculation.py)
* Morphological Operations
* [Dilation Operation](digital_image_processing/morphological_operations/dilation_operation.py)
* [Erosion Operation](digital_image_processing/morphological_operations/erosion_operation.py)
* Resize
* [Resize](digital_image_processing/resize/resize.py)
* Rotation
* [Rotation](digital_image_processing/rotation/rotation.py)
* [Sepia](digital_image_processing/sepia.py)
* [Test Digital Image Processing](digital_image_processing/test_digital_image_processing.py)
## Divide And Conquer
* [Closest Pair Of Points](divide_and_conquer/closest_pair_of_points.py)
* [Convex Hull](divide_and_conquer/convex_hull.py)
* [Heaps Algorithm](divide_and_conquer/heaps_algorithm.py)
* [Heaps Algorithm Iterative](divide_and_conquer/heaps_algorithm_iterative.py)
* [Inversions](divide_and_conquer/inversions.py)
* [Kth Order Statistic](divide_and_conquer/kth_order_statistic.py)
* [Max Difference Pair](divide_and_conquer/max_difference_pair.py)
* [Max Subarray](divide_and_conquer/max_subarray.py)
* [Mergesort](divide_and_conquer/mergesort.py)
* [Peak](divide_and_conquer/peak.py)
* [Power](divide_and_conquer/power.py)
* [Strassen Matrix Multiplication](divide_and_conquer/strassen_matrix_multiplication.py)
## Dynamic Programming
* [Abbreviation](dynamic_programming/abbreviation.py)
* [All Construct](dynamic_programming/all_construct.py)
* [Bitmask](dynamic_programming/bitmask.py)
* [Catalan Numbers](dynamic_programming/catalan_numbers.py)
* [Climbing Stairs](dynamic_programming/climbing_stairs.py)
* [Combination Sum Iv](dynamic_programming/combination_sum_iv.py)
* [Edit Distance](dynamic_programming/edit_distance.py)
* [Factorial](dynamic_programming/factorial.py)
* [Fast Fibonacci](dynamic_programming/fast_fibonacci.py)
* [Fibonacci](dynamic_programming/fibonacci.py)
* [Fizz Buzz](dynamic_programming/fizz_buzz.py)
* [Floyd Warshall](dynamic_programming/floyd_warshall.py)
* [Integer Partition](dynamic_programming/integer_partition.py)
* [Iterating Through Submasks](dynamic_programming/iterating_through_submasks.py)
* [Knapsack](dynamic_programming/knapsack.py)
* [Longest Common Subsequence](dynamic_programming/longest_common_subsequence.py)
* [Longest Common Substring](dynamic_programming/longest_common_substring.py)
* [Longest Increasing Subsequence](dynamic_programming/longest_increasing_subsequence.py)
* [Longest Increasing Subsequence O(Nlogn)](dynamic_programming/longest_increasing_subsequence_o(nlogn).py)
* [Longest Sub Array](dynamic_programming/longest_sub_array.py)
* [Matrix Chain Order](dynamic_programming/matrix_chain_order.py)
* [Max Non Adjacent Sum](dynamic_programming/max_non_adjacent_sum.py)
* [Max Product Subarray](dynamic_programming/max_product_subarray.py)
* [Max Subarray Sum](dynamic_programming/max_subarray_sum.py)
* [Min Distance Up Bottom](dynamic_programming/min_distance_up_bottom.py)
* [Minimum Coin Change](dynamic_programming/minimum_coin_change.py)
* [Minimum Cost Path](dynamic_programming/minimum_cost_path.py)
* [Minimum Partition](dynamic_programming/minimum_partition.py)
* [Minimum Size Subarray Sum](dynamic_programming/minimum_size_subarray_sum.py)
* [Minimum Squares To Represent A Number](dynamic_programming/minimum_squares_to_represent_a_number.py)
* [Minimum Steps To One](dynamic_programming/minimum_steps_to_one.py)
* [Minimum Tickets Cost](dynamic_programming/minimum_tickets_cost.py)
* [Optimal Binary Search Tree](dynamic_programming/optimal_binary_search_tree.py)
* [Palindrome Partitioning](dynamic_programming/palindrome_partitioning.py)
* [Regex Match](dynamic_programming/regex_match.py)
* [Rod Cutting](dynamic_programming/rod_cutting.py)
* [Smith Waterman](dynamic_programming/smith_waterman.py)
* [Subset Generation](dynamic_programming/subset_generation.py)
* [Sum Of Subset](dynamic_programming/sum_of_subset.py)
* [Tribonacci](dynamic_programming/tribonacci.py)
* [Viterbi](dynamic_programming/viterbi.py)
* [Word Break](dynamic_programming/word_break.py)
## Electronics
* [Apparent Power](electronics/apparent_power.py)
* [Builtin Voltage](electronics/builtin_voltage.py)
* [Carrier Concentration](electronics/carrier_concentration.py)
* [Circular Convolution](electronics/circular_convolution.py)
* [Coulombs Law](electronics/coulombs_law.py)
* [Electric Conductivity](electronics/electric_conductivity.py)
* [Electric Power](electronics/electric_power.py)
* [Electrical Impedance](electronics/electrical_impedance.py)
* [Ind Reactance](electronics/ind_reactance.py)
* [Ohms Law](electronics/ohms_law.py)
* [Real And Reactive Power](electronics/real_and_reactive_power.py)
* [Resistor Equivalence](electronics/resistor_equivalence.py)
* [Resonant Frequency](electronics/resonant_frequency.py)
## File Transfer
* [Receive File](file_transfer/receive_file.py)
* [Send File](file_transfer/send_file.py)
* Tests
* [Test Send File](file_transfer/tests/test_send_file.py)
## Financial
* [Equated Monthly Installments](financial/equated_monthly_installments.py)
* [Interest](financial/interest.py)
* [Present Value](financial/present_value.py)
* [Price Plus Tax](financial/price_plus_tax.py)
## Fractals
* [Julia Sets](fractals/julia_sets.py)
* [Koch Snowflake](fractals/koch_snowflake.py)
* [Mandelbrot](fractals/mandelbrot.py)
* [Sierpinski Triangle](fractals/sierpinski_triangle.py)
## Genetic Algorithm
* [Basic String](genetic_algorithm/basic_string.py)
## Geodesy
* [Haversine Distance](geodesy/haversine_distance.py)
* [Lamberts Ellipsoidal Distance](geodesy/lamberts_ellipsoidal_distance.py)
## Graphics
* [Bezier Curve](graphics/bezier_curve.py)
* [Vector3 For 2D Rendering](graphics/vector3_for_2d_rendering.py)
## Graphs
* [A Star](graphs/a_star.py)
* [Articulation Points](graphs/articulation_points.py)
* [Basic Graphs](graphs/basic_graphs.py)
* [Bellman Ford](graphs/bellman_ford.py)
* [Bi Directional Dijkstra](graphs/bi_directional_dijkstra.py)
* [Bidirectional A Star](graphs/bidirectional_a_star.py)
* [Bidirectional Breadth First Search](graphs/bidirectional_breadth_first_search.py)
* [Boruvka](graphs/boruvka.py)
* [Breadth First Search](graphs/breadth_first_search.py)
* [Breadth First Search 2](graphs/breadth_first_search_2.py)
* [Breadth First Search Shortest Path](graphs/breadth_first_search_shortest_path.py)
* [Breadth First Search Shortest Path 2](graphs/breadth_first_search_shortest_path_2.py)
* [Breadth First Search Zero One Shortest Path](graphs/breadth_first_search_zero_one_shortest_path.py)
* [Check Bipartite Graph Bfs](graphs/check_bipartite_graph_bfs.py)
* [Check Bipartite Graph Dfs](graphs/check_bipartite_graph_dfs.py)
* [Check Cycle](graphs/check_cycle.py)
* [Connected Components](graphs/connected_components.py)
* [Depth First Search](graphs/depth_first_search.py)
* [Depth First Search 2](graphs/depth_first_search_2.py)
* [Dijkstra](graphs/dijkstra.py)
* [Dijkstra 2](graphs/dijkstra_2.py)
* [Dijkstra Algorithm](graphs/dijkstra_algorithm.py)
* [Dijkstra Alternate](graphs/dijkstra_alternate.py)
* [Dijkstra Binary Grid](graphs/dijkstra_binary_grid.py)
* [Dinic](graphs/dinic.py)
* [Directed And Undirected (Weighted) Graph](graphs/directed_and_undirected_(weighted)_graph.py)
* [Edmonds Karp Multiple Source And Sink](graphs/edmonds_karp_multiple_source_and_sink.py)
* [Eulerian Path And Circuit For Undirected Graph](graphs/eulerian_path_and_circuit_for_undirected_graph.py)
* [Even Tree](graphs/even_tree.py)
* [Finding Bridges](graphs/finding_bridges.py)
* [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py)
* [G Topological Sort](graphs/g_topological_sort.py)
* [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py)
* [Graph Adjacency List](graphs/graph_adjacency_list.py)
* [Graph Adjacency Matrix](graphs/graph_adjacency_matrix.py)
* [Graph List](graphs/graph_list.py)
* [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py)
* [Greedy Best First](graphs/greedy_best_first.py)
* [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py)
* [Kahns Algorithm Long](graphs/kahns_algorithm_long.py)
* [Kahns Algorithm Topo](graphs/kahns_algorithm_topo.py)
* [Karger](graphs/karger.py)
* [Markov Chain](graphs/markov_chain.py)
* [Matching Min Vertex Cover](graphs/matching_min_vertex_cover.py)
* [Minimum Path Sum](graphs/minimum_path_sum.py)
* [Minimum Spanning Tree Boruvka](graphs/minimum_spanning_tree_boruvka.py)
* [Minimum Spanning Tree Kruskal](graphs/minimum_spanning_tree_kruskal.py)
* [Minimum Spanning Tree Kruskal2](graphs/minimum_spanning_tree_kruskal2.py)
* [Minimum Spanning Tree Prims](graphs/minimum_spanning_tree_prims.py)
* [Minimum Spanning Tree Prims2](graphs/minimum_spanning_tree_prims2.py)
* [Multi Heuristic Astar](graphs/multi_heuristic_astar.py)
* [Page Rank](graphs/page_rank.py)
* [Prim](graphs/prim.py)
* [Random Graph Generator](graphs/random_graph_generator.py)
* [Scc Kosaraju](graphs/scc_kosaraju.py)
* [Strongly Connected Components](graphs/strongly_connected_components.py)
* [Tarjans Scc](graphs/tarjans_scc.py)
* Tests
* [Test Min Spanning Tree Kruskal](graphs/tests/test_min_spanning_tree_kruskal.py)
* [Test Min Spanning Tree Prim](graphs/tests/test_min_spanning_tree_prim.py)
## Greedy Methods
* [Fractional Knapsack](greedy_methods/fractional_knapsack.py)
* [Fractional Knapsack 2](greedy_methods/fractional_knapsack_2.py)
* [Minimum Waiting Time](greedy_methods/minimum_waiting_time.py)
* [Optimal Merge Pattern](greedy_methods/optimal_merge_pattern.py)
## Hashes
* [Adler32](hashes/adler32.py)
* [Chaos Machine](hashes/chaos_machine.py)
* [Djb2](hashes/djb2.py)
* [Elf](hashes/elf.py)
* [Enigma Machine](hashes/enigma_machine.py)
* [Hamming Code](hashes/hamming_code.py)
* [Luhn](hashes/luhn.py)
* [Md5](hashes/md5.py)
* [Sdbm](hashes/sdbm.py)
* [Sha1](hashes/sha1.py)
* [Sha256](hashes/sha256.py)
## Knapsack
* [Greedy Knapsack](knapsack/greedy_knapsack.py)
* [Knapsack](knapsack/knapsack.py)
* [Recursive Approach Knapsack](knapsack/recursive_approach_knapsack.py)
* Tests
* [Test Greedy Knapsack](knapsack/tests/test_greedy_knapsack.py)
* [Test Knapsack](knapsack/tests/test_knapsack.py)
## Linear Algebra
* Src
* [Conjugate Gradient](linear_algebra/src/conjugate_gradient.py)
* [Lib](linear_algebra/src/lib.py)
* [Polynom For Points](linear_algebra/src/polynom_for_points.py)
* [Power Iteration](linear_algebra/src/power_iteration.py)
* [Rank Of Matrix](linear_algebra/src/rank_of_matrix.py)
* [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py)
* [Schur Complement](linear_algebra/src/schur_complement.py)
* [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py)
* [Transformations 2D](linear_algebra/src/transformations_2d.py)
## Linear Programming
* [Simplex](linear_programming/simplex.py)
## Machine Learning
* [Astar](machine_learning/astar.py)
* [Data Transformations](machine_learning/data_transformations.py)
* [Decision Tree](machine_learning/decision_tree.py)
* [Dimensionality Reduction](machine_learning/dimensionality_reduction.py)
* Forecasting
* [Run](machine_learning/forecasting/run.py)
* [Gradient Descent](machine_learning/gradient_descent.py)
* [K Means Clust](machine_learning/k_means_clust.py)
* [K Nearest Neighbours](machine_learning/k_nearest_neighbours.py)
* [Linear Discriminant Analysis](machine_learning/linear_discriminant_analysis.py)
* [Linear Regression](machine_learning/linear_regression.py)
* Local Weighted Learning
* [Local Weighted Learning](machine_learning/local_weighted_learning/local_weighted_learning.py)
* [Logistic Regression](machine_learning/logistic_regression.py)
* [Mfcc](machine_learning/mfcc.py)
* [Multilayer Perceptron Classifier](machine_learning/multilayer_perceptron_classifier.py)
* [Polynomial Regression](machine_learning/polynomial_regression.py)
* [Scoring Functions](machine_learning/scoring_functions.py)
* [Self Organizing Map](machine_learning/self_organizing_map.py)
* [Sequential Minimum Optimization](machine_learning/sequential_minimum_optimization.py)
* [Similarity Search](machine_learning/similarity_search.py)
* [Support Vector Machines](machine_learning/support_vector_machines.py)
* [Word Frequency Functions](machine_learning/word_frequency_functions.py)
* [Xgboost Classifier](machine_learning/xgboost_classifier.py)
* [Xgboost Regressor](machine_learning/xgboost_regressor.py)
## Maths
* [Abs](maths/abs.py)
* [Addition Without Arithmetic](maths/addition_without_arithmetic.py)
* [Aliquot Sum](maths/aliquot_sum.py)
* [Allocation Number](maths/allocation_number.py)
* [Arc Length](maths/arc_length.py)
* [Area](maths/area.py)
* [Area Under Curve](maths/area_under_curve.py)
* [Armstrong Numbers](maths/armstrong_numbers.py)
* [Automorphic Number](maths/automorphic_number.py)
* [Average Absolute Deviation](maths/average_absolute_deviation.py)
* [Average Mean](maths/average_mean.py)
* [Average Median](maths/average_median.py)
* [Average Mode](maths/average_mode.py)
* [Bailey Borwein Plouffe](maths/bailey_borwein_plouffe.py)
* [Basic Maths](maths/basic_maths.py)
* [Binary Exp Mod](maths/binary_exp_mod.py)
* [Binary Exponentiation](maths/binary_exponentiation.py)
* [Binary Exponentiation 2](maths/binary_exponentiation_2.py)
* [Binary Exponentiation 3](maths/binary_exponentiation_3.py)
* [Binomial Coefficient](maths/binomial_coefficient.py)
* [Binomial Distribution](maths/binomial_distribution.py)
* [Bisection](maths/bisection.py)
* [Carmichael Number](maths/carmichael_number.py)
* [Catalan Number](maths/catalan_number.py)
* [Ceil](maths/ceil.py)
* [Check Polygon](maths/check_polygon.py)
* [Chudnovsky Algorithm](maths/chudnovsky_algorithm.py)
* [Collatz Sequence](maths/collatz_sequence.py)
* [Combinations](maths/combinations.py)
* [Continued Fraction](maths/continued_fraction.py)
* [Decimal Isolate](maths/decimal_isolate.py)
* [Decimal To Fraction](maths/decimal_to_fraction.py)
* [Dodecahedron](maths/dodecahedron.py)
* [Double Factorial Iterative](maths/double_factorial_iterative.py)
* [Double Factorial Recursive](maths/double_factorial_recursive.py)
* [Dual Number Automatic Differentiation](maths/dual_number_automatic_differentiation.py)
* [Entropy](maths/entropy.py)
* [Euclidean Distance](maths/euclidean_distance.py)
* [Euler Method](maths/euler_method.py)
* [Euler Modified](maths/euler_modified.py)
* [Eulers Totient](maths/eulers_totient.py)
* [Extended Euclidean Algorithm](maths/extended_euclidean_algorithm.py)
* [Factorial](maths/factorial.py)
* [Factors](maths/factors.py)
* [Fermat Little Theorem](maths/fermat_little_theorem.py)
* [Fibonacci](maths/fibonacci.py)
* [Find Max](maths/find_max.py)
* [Find Min](maths/find_min.py)
* [Floor](maths/floor.py)
* [Gamma](maths/gamma.py)
* [Gamma Recursive](maths/gamma_recursive.py)
* [Gaussian](maths/gaussian.py)
* [Gaussian Error Linear Unit](maths/gaussian_error_linear_unit.py)
* [Gcd Of N Numbers](maths/gcd_of_n_numbers.py)
* [Greatest Common Divisor](maths/greatest_common_divisor.py)
* [Greedy Coin Change](maths/greedy_coin_change.py)
* [Hamming Numbers](maths/hamming_numbers.py)
* [Hardy Ramanujanalgo](maths/hardy_ramanujanalgo.py)
* [Harshad Numbers](maths/harshad_numbers.py)
* [Hexagonal Number](maths/hexagonal_number.py)
* [Integration By Simpson Approx](maths/integration_by_simpson_approx.py)
* [Interquartile Range](maths/interquartile_range.py)
* [Is Int Palindrome](maths/is_int_palindrome.py)
* [Is Ip V4 Address Valid](maths/is_ip_v4_address_valid.py)
* [Is Square Free](maths/is_square_free.py)
* [Jaccard Similarity](maths/jaccard_similarity.py)
* [Juggler Sequence](maths/juggler_sequence.py)
* [Karatsuba](maths/karatsuba.py)
* [Krishnamurthy Number](maths/krishnamurthy_number.py)
* [Kth Lexicographic Permutation](maths/kth_lexicographic_permutation.py)
* [Largest Of Very Large Numbers](maths/largest_of_very_large_numbers.py)
* [Least Common Multiple](maths/least_common_multiple.py)
* [Line Length](maths/line_length.py)
* [Liouville Lambda](maths/liouville_lambda.py)
* [Lucas Lehmer Primality Test](maths/lucas_lehmer_primality_test.py)
* [Lucas Series](maths/lucas_series.py)
* [Maclaurin Series](maths/maclaurin_series.py)
* [Manhattan Distance](maths/manhattan_distance.py)
* [Matrix Exponentiation](maths/matrix_exponentiation.py)
* [Max Sum Sliding Window](maths/max_sum_sliding_window.py)
* [Median Of Two Arrays](maths/median_of_two_arrays.py)
* [Mobius Function](maths/mobius_function.py)
* [Modular Exponential](maths/modular_exponential.py)
* [Monte Carlo](maths/monte_carlo.py)
* [Monte Carlo Dice](maths/monte_carlo_dice.py)
* [Nevilles Method](maths/nevilles_method.py)
* [Newton Raphson](maths/newton_raphson.py)
* [Number Of Digits](maths/number_of_digits.py)
* [Numerical Integration](maths/numerical_integration.py)
* [Odd Sieve](maths/odd_sieve.py)
* [Perfect Cube](maths/perfect_cube.py)
* [Perfect Number](maths/perfect_number.py)
* [Perfect Square](maths/perfect_square.py)
* [Persistence](maths/persistence.py)
* [Pi Generator](maths/pi_generator.py)
* [Pi Monte Carlo Estimation](maths/pi_monte_carlo_estimation.py)
* [Points Are Collinear 3D](maths/points_are_collinear_3d.py)
* [Pollard Rho](maths/pollard_rho.py)
* [Polygonal Numbers](maths/polygonal_numbers.py)
* [Polynomial Evaluation](maths/polynomial_evaluation.py)
* Polynomials
* [Single Indeterminate Operations](maths/polynomials/single_indeterminate_operations.py)
* [Power Using Recursion](maths/power_using_recursion.py)
* [Prime Check](maths/prime_check.py)
* [Prime Factors](maths/prime_factors.py)
* [Prime Numbers](maths/prime_numbers.py)
* [Prime Sieve Eratosthenes](maths/prime_sieve_eratosthenes.py)
* [Primelib](maths/primelib.py)
* [Print Multiplication Table](maths/print_multiplication_table.py)
* [Pronic Number](maths/pronic_number.py)
* [Proth Number](maths/proth_number.py)
* [Pythagoras](maths/pythagoras.py)
* [Qr Decomposition](maths/qr_decomposition.py)
* [Quadratic Equations Complex Numbers](maths/quadratic_equations_complex_numbers.py)
* [Radians](maths/radians.py)
* [Radix2 Fft](maths/radix2_fft.py)
* [Relu](maths/relu.py)
* [Remove Digit](maths/remove_digit.py)
* [Runge Kutta](maths/runge_kutta.py)
* [Segmented Sieve](maths/segmented_sieve.py)
* Series
* [Arithmetic](maths/series/arithmetic.py)
* [Geometric](maths/series/geometric.py)
* [Geometric Series](maths/series/geometric_series.py)
* [Harmonic](maths/series/harmonic.py)
* [Harmonic Series](maths/series/harmonic_series.py)
* [Hexagonal Numbers](maths/series/hexagonal_numbers.py)
* [P Series](maths/series/p_series.py)
* [Sieve Of Eratosthenes](maths/sieve_of_eratosthenes.py)
* [Sigmoid](maths/sigmoid.py)
* [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py)
* [Signum](maths/signum.py)
* [Simpson Rule](maths/simpson_rule.py)
* [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py)
* [Sin](maths/sin.py)
* [Sock Merchant](maths/sock_merchant.py)
* [Softmax](maths/softmax.py)
* [Square Root](maths/square_root.py)
* [Sum Of Arithmetic Series](maths/sum_of_arithmetic_series.py)
* [Sum Of Digits](maths/sum_of_digits.py)
* [Sum Of Geometric Progression](maths/sum_of_geometric_progression.py)
* [Sum Of Harmonic Series](maths/sum_of_harmonic_series.py)
* [Sumset](maths/sumset.py)
* [Sylvester Sequence](maths/sylvester_sequence.py)
* [Tanh](maths/tanh.py)
* [Test Prime Check](maths/test_prime_check.py)
* [Three Sum](maths/three_sum.py)
* [Trapezoidal Rule](maths/trapezoidal_rule.py)
* [Triplet Sum](maths/triplet_sum.py)
* [Twin Prime](maths/twin_prime.py)
* [Two Pointer](maths/two_pointer.py)
* [Two Sum](maths/two_sum.py)
* [Ugly Numbers](maths/ugly_numbers.py)
* [Volume](maths/volume.py)
* [Weird Number](maths/weird_number.py)
* [Zellers Congruence](maths/zellers_congruence.py)
## Matrix
* [Binary Search Matrix](matrix/binary_search_matrix.py)
* [Count Islands In Matrix](matrix/count_islands_in_matrix.py)
* [Count Negative Numbers In Sorted Matrix](matrix/count_negative_numbers_in_sorted_matrix.py)
* [Count Paths](matrix/count_paths.py)
* [Cramers Rule 2X2](matrix/cramers_rule_2x2.py)
* [Inverse Of Matrix](matrix/inverse_of_matrix.py)
* [Largest Square Area In Matrix](matrix/largest_square_area_in_matrix.py)
* [Matrix Class](matrix/matrix_class.py)
* [Matrix Operation](matrix/matrix_operation.py)
* [Max Area Of Island](matrix/max_area_of_island.py)
* [Nth Fibonacci Using Matrix Exponentiation](matrix/nth_fibonacci_using_matrix_exponentiation.py)
* [Pascal Triangle](matrix/pascal_triangle.py)
* [Rotate Matrix](matrix/rotate_matrix.py)
* [Searching In Sorted Matrix](matrix/searching_in_sorted_matrix.py)
* [Sherman Morrison](matrix/sherman_morrison.py)
* [Spiral Print](matrix/spiral_print.py)
* Tests
* [Test Matrix Operation](matrix/tests/test_matrix_operation.py)
## Networking Flow
* [Ford Fulkerson](networking_flow/ford_fulkerson.py)
* [Minimum Cut](networking_flow/minimum_cut.py)
## Neural Network
* [2 Hidden Layers Neural Network](neural_network/2_hidden_layers_neural_network.py)
* Activation Functions
* [Exponential Linear Unit](neural_network/activation_functions/exponential_linear_unit.py)
* [Leaky Rectified Linear Unit](neural_network/activation_functions/leaky_rectified_linear_unit.py)
* [Scaled Exponential Linear Unit](neural_network/activation_functions/scaled_exponential_linear_unit.py)
* [Back Propagation Neural Network](neural_network/back_propagation_neural_network.py)
* [Convolution Neural Network](neural_network/convolution_neural_network.py)
* [Perceptron](neural_network/perceptron.py)
* [Simple Neural Network](neural_network/simple_neural_network.py)
## Other
* [Activity Selection](other/activity_selection.py)
* [Alternative List Arrange](other/alternative_list_arrange.py)
* [Davisb Putnamb Logemannb Loveland](other/davisb_putnamb_logemannb_loveland.py)
* [Dijkstra Bankers Algorithm](other/dijkstra_bankers_algorithm.py)
* [Doomsday](other/doomsday.py)
* [Fischer Yates Shuffle](other/fischer_yates_shuffle.py)
* [Gauss Easter](other/gauss_easter.py)
* [Graham Scan](other/graham_scan.py)
* [Greedy](other/greedy.py)
* [Guess The Number Search](other/guess_the_number_search.py)
* [H Index](other/h_index.py)
* [Least Recently Used](other/least_recently_used.py)
* [Lfu Cache](other/lfu_cache.py)
* [Linear Congruential Generator](other/linear_congruential_generator.py)
* [Lru Cache](other/lru_cache.py)
* [Magicdiamondpattern](other/magicdiamondpattern.py)
* [Maximum Subsequence](other/maximum_subsequence.py)
* [Nested Brackets](other/nested_brackets.py)
* [Number Container System](other/number_container_system.py)
* [Password](other/password.py)
* [Quine](other/quine.py)
* [Scoring Algorithm](other/scoring_algorithm.py)
* [Sdes](other/sdes.py)
* [Tower Of Hanoi](other/tower_of_hanoi.py)
* [Word Search](other/word_search.py)
## Physics
* [Altitude Pressure](physics/altitude_pressure.py)
* [Archimedes Principle](physics/archimedes_principle.py)
* [Basic Orbital Capture](physics/basic_orbital_capture.py)
* [Casimir Effect](physics/casimir_effect.py)
* [Centripetal Force](physics/centripetal_force.py)
* [Coulombs Law](physics/coulombs_law.py)
* [Grahams Law](physics/grahams_law.py)
* [Horizontal Projectile Motion](physics/horizontal_projectile_motion.py)
* [Hubble Parameter](physics/hubble_parameter.py)
* [Ideal Gas Law](physics/ideal_gas_law.py)
* [Kinetic Energy](physics/kinetic_energy.py)
* [Lorentz Transformation Four Vector](physics/lorentz_transformation_four_vector.py)
* [Malus Law](physics/malus_law.py)
* [N Body Simulation](physics/n_body_simulation.py)
* [Newtons Law Of Gravitation](physics/newtons_law_of_gravitation.py)
* [Newtons Second Law Of Motion](physics/newtons_second_law_of_motion.py)
* [Potential Energy](physics/potential_energy.py)
* [Rms Speed Of Molecule](physics/rms_speed_of_molecule.py)
* [Shear Stress](physics/shear_stress.py)
* [Speed Of Sound](physics/speed_of_sound.py)
## Project Euler
* Problem 001
* [Sol1](project_euler/problem_001/sol1.py)
* [Sol2](project_euler/problem_001/sol2.py)
* [Sol3](project_euler/problem_001/sol3.py)
* [Sol4](project_euler/problem_001/sol4.py)
* [Sol5](project_euler/problem_001/sol5.py)
* [Sol6](project_euler/problem_001/sol6.py)
* [Sol7](project_euler/problem_001/sol7.py)
* Problem 002
* [Sol1](project_euler/problem_002/sol1.py)
* [Sol2](project_euler/problem_002/sol2.py)
* [Sol3](project_euler/problem_002/sol3.py)
* [Sol4](project_euler/problem_002/sol4.py)
* [Sol5](project_euler/problem_002/sol5.py)
* Problem 003
* [Sol1](project_euler/problem_003/sol1.py)
* [Sol2](project_euler/problem_003/sol2.py)
* [Sol3](project_euler/problem_003/sol3.py)
* Problem 004
* [Sol1](project_euler/problem_004/sol1.py)
* [Sol2](project_euler/problem_004/sol2.py)
* Problem 005
* [Sol1](project_euler/problem_005/sol1.py)
* [Sol2](project_euler/problem_005/sol2.py)
* Problem 006
* [Sol1](project_euler/problem_006/sol1.py)
* [Sol2](project_euler/problem_006/sol2.py)
* [Sol3](project_euler/problem_006/sol3.py)
* [Sol4](project_euler/problem_006/sol4.py)
* Problem 007
* [Sol1](project_euler/problem_007/sol1.py)
* [Sol2](project_euler/problem_007/sol2.py)
* [Sol3](project_euler/problem_007/sol3.py)
* Problem 008
* [Sol1](project_euler/problem_008/sol1.py)
* [Sol2](project_euler/problem_008/sol2.py)
* [Sol3](project_euler/problem_008/sol3.py)
* Problem 009
* [Sol1](project_euler/problem_009/sol1.py)
* [Sol2](project_euler/problem_009/sol2.py)
* [Sol3](project_euler/problem_009/sol3.py)
* Problem 010
* [Sol1](project_euler/problem_010/sol1.py)
* [Sol2](project_euler/problem_010/sol2.py)
* [Sol3](project_euler/problem_010/sol3.py)
* Problem 011
* [Sol1](project_euler/problem_011/sol1.py)
* [Sol2](project_euler/problem_011/sol2.py)
* Problem 012
* [Sol1](project_euler/problem_012/sol1.py)
* [Sol2](project_euler/problem_012/sol2.py)
* Problem 013
* [Sol1](project_euler/problem_013/sol1.py)
* Problem 014
* [Sol1](project_euler/problem_014/sol1.py)
* [Sol2](project_euler/problem_014/sol2.py)
* Problem 015
* [Sol1](project_euler/problem_015/sol1.py)
* Problem 016
* [Sol1](project_euler/problem_016/sol1.py)
* [Sol2](project_euler/problem_016/sol2.py)
* Problem 017
* [Sol1](project_euler/problem_017/sol1.py)
* Problem 018
* [Solution](project_euler/problem_018/solution.py)
* Problem 019
* [Sol1](project_euler/problem_019/sol1.py)
* Problem 020
* [Sol1](project_euler/problem_020/sol1.py)
* [Sol2](project_euler/problem_020/sol2.py)
* [Sol3](project_euler/problem_020/sol3.py)
* [Sol4](project_euler/problem_020/sol4.py)
* Problem 021
* [Sol1](project_euler/problem_021/sol1.py)
* Problem 022
* [Sol1](project_euler/problem_022/sol1.py)
* [Sol2](project_euler/problem_022/sol2.py)
* Problem 023
* [Sol1](project_euler/problem_023/sol1.py)
* Problem 024
* [Sol1](project_euler/problem_024/sol1.py)
* Problem 025
* [Sol1](project_euler/problem_025/sol1.py)
* [Sol2](project_euler/problem_025/sol2.py)
* [Sol3](project_euler/problem_025/sol3.py)
* Problem 026
* [Sol1](project_euler/problem_026/sol1.py)
* Problem 027
* [Sol1](project_euler/problem_027/sol1.py)
* Problem 028
* [Sol1](project_euler/problem_028/sol1.py)
* Problem 029
* [Sol1](project_euler/problem_029/sol1.py)
* Problem 030
* [Sol1](project_euler/problem_030/sol1.py)
* Problem 031
* [Sol1](project_euler/problem_031/sol1.py)
* [Sol2](project_euler/problem_031/sol2.py)
* Problem 032
* [Sol32](project_euler/problem_032/sol32.py)
* Problem 033
* [Sol1](project_euler/problem_033/sol1.py)
* Problem 034
* [Sol1](project_euler/problem_034/sol1.py)
* Problem 035
* [Sol1](project_euler/problem_035/sol1.py)
* Problem 036
* [Sol1](project_euler/problem_036/sol1.py)
* Problem 037
* [Sol1](project_euler/problem_037/sol1.py)
* Problem 038
* [Sol1](project_euler/problem_038/sol1.py)
* Problem 039
* [Sol1](project_euler/problem_039/sol1.py)
* Problem 040
* [Sol1](project_euler/problem_040/sol1.py)
* Problem 041
* [Sol1](project_euler/problem_041/sol1.py)
* Problem 042
* [Solution42](project_euler/problem_042/solution42.py)
* Problem 043
* [Sol1](project_euler/problem_043/sol1.py)
* Problem 044
* [Sol1](project_euler/problem_044/sol1.py)
* Problem 045
* [Sol1](project_euler/problem_045/sol1.py)
* Problem 046
* [Sol1](project_euler/problem_046/sol1.py)
* Problem 047
* [Sol1](project_euler/problem_047/sol1.py)
* Problem 048
* [Sol1](project_euler/problem_048/sol1.py)
* Problem 049
* [Sol1](project_euler/problem_049/sol1.py)
* Problem 050
* [Sol1](project_euler/problem_050/sol1.py)
* Problem 051
* [Sol1](project_euler/problem_051/sol1.py)
* Problem 052
* [Sol1](project_euler/problem_052/sol1.py)
* Problem 053
* [Sol1](project_euler/problem_053/sol1.py)
* Problem 054
* [Sol1](project_euler/problem_054/sol1.py)
* [Test Poker Hand](project_euler/problem_054/test_poker_hand.py)
* Problem 055
* [Sol1](project_euler/problem_055/sol1.py)
* Problem 056
* [Sol1](project_euler/problem_056/sol1.py)
* Problem 057
* [Sol1](project_euler/problem_057/sol1.py)
* Problem 058
* [Sol1](project_euler/problem_058/sol1.py)
* Problem 059
* [Sol1](project_euler/problem_059/sol1.py)
* Problem 062
* [Sol1](project_euler/problem_062/sol1.py)
* Problem 063
* [Sol1](project_euler/problem_063/sol1.py)
* Problem 064
* [Sol1](project_euler/problem_064/sol1.py)
* Problem 065
* [Sol1](project_euler/problem_065/sol1.py)
* Problem 067
* [Sol1](project_euler/problem_067/sol1.py)
* [Sol2](project_euler/problem_067/sol2.py)
* Problem 068
* [Sol1](project_euler/problem_068/sol1.py)
* Problem 069
* [Sol1](project_euler/problem_069/sol1.py)
* Problem 070
* [Sol1](project_euler/problem_070/sol1.py)
* Problem 071
* [Sol1](project_euler/problem_071/sol1.py)
* Problem 072
* [Sol1](project_euler/problem_072/sol1.py)
* [Sol2](project_euler/problem_072/sol2.py)
* Problem 073
* [Sol1](project_euler/problem_073/sol1.py)
* Problem 074
* [Sol1](project_euler/problem_074/sol1.py)
* [Sol2](project_euler/problem_074/sol2.py)
* Problem 075
* [Sol1](project_euler/problem_075/sol1.py)
* Problem 076
* [Sol1](project_euler/problem_076/sol1.py)
* Problem 077
* [Sol1](project_euler/problem_077/sol1.py)
* Problem 078
* [Sol1](project_euler/problem_078/sol1.py)
* Problem 079
* [Sol1](project_euler/problem_079/sol1.py)
* Problem 080
* [Sol1](project_euler/problem_080/sol1.py)
* Problem 081
* [Sol1](project_euler/problem_081/sol1.py)
* Problem 082
* [Sol1](project_euler/problem_082/sol1.py)
* Problem 085
* [Sol1](project_euler/problem_085/sol1.py)
* Problem 086
* [Sol1](project_euler/problem_086/sol1.py)
* Problem 087
* [Sol1](project_euler/problem_087/sol1.py)
* Problem 089
* [Sol1](project_euler/problem_089/sol1.py)
* Problem 091
* [Sol1](project_euler/problem_091/sol1.py)
* Problem 092
* [Sol1](project_euler/problem_092/sol1.py)
* Problem 094
* [Sol1](project_euler/problem_094/sol1.py)
* Problem 097
* [Sol1](project_euler/problem_097/sol1.py)
* Problem 099
* [Sol1](project_euler/problem_099/sol1.py)
* Problem 100
* [Sol1](project_euler/problem_100/sol1.py)
* Problem 101
* [Sol1](project_euler/problem_101/sol1.py)
* Problem 102
* [Sol1](project_euler/problem_102/sol1.py)
* Problem 104
* [Sol1](project_euler/problem_104/sol1.py)
* Problem 107
* [Sol1](project_euler/problem_107/sol1.py)
* Problem 109
* [Sol1](project_euler/problem_109/sol1.py)
* Problem 112
* [Sol1](project_euler/problem_112/sol1.py)
* Problem 113
* [Sol1](project_euler/problem_113/sol1.py)
* Problem 114
* [Sol1](project_euler/problem_114/sol1.py)
* Problem 115
* [Sol1](project_euler/problem_115/sol1.py)
* Problem 116
* [Sol1](project_euler/problem_116/sol1.py)
* Problem 117
* [Sol1](project_euler/problem_117/sol1.py)
* Problem 119
* [Sol1](project_euler/problem_119/sol1.py)
* Problem 120
* [Sol1](project_euler/problem_120/sol1.py)
* Problem 121
* [Sol1](project_euler/problem_121/sol1.py)
* Problem 123
* [Sol1](project_euler/problem_123/sol1.py)
* Problem 125
* [Sol1](project_euler/problem_125/sol1.py)
* Problem 129
* [Sol1](project_euler/problem_129/sol1.py)
* Problem 131
* [Sol1](project_euler/problem_131/sol1.py)
* Problem 135
* [Sol1](project_euler/problem_135/sol1.py)
* Problem 144
* [Sol1](project_euler/problem_144/sol1.py)
* Problem 145
* [Sol1](project_euler/problem_145/sol1.py)
* Problem 173
* [Sol1](project_euler/problem_173/sol1.py)
* Problem 174
* [Sol1](project_euler/problem_174/sol1.py)
* Problem 180
* [Sol1](project_euler/problem_180/sol1.py)
* Problem 187
* [Sol1](project_euler/problem_187/sol1.py)
* Problem 188
* [Sol1](project_euler/problem_188/sol1.py)
* Problem 191
* [Sol1](project_euler/problem_191/sol1.py)
* Problem 203
* [Sol1](project_euler/problem_203/sol1.py)
* Problem 205
* [Sol1](project_euler/problem_205/sol1.py)
* Problem 206
* [Sol1](project_euler/problem_206/sol1.py)
* Problem 207
* [Sol1](project_euler/problem_207/sol1.py)
* Problem 234
* [Sol1](project_euler/problem_234/sol1.py)
* Problem 301
* [Sol1](project_euler/problem_301/sol1.py)
* Problem 493
* [Sol1](project_euler/problem_493/sol1.py)
* Problem 551
* [Sol1](project_euler/problem_551/sol1.py)
* Problem 587
* [Sol1](project_euler/problem_587/sol1.py)
* Problem 686
* [Sol1](project_euler/problem_686/sol1.py)
* Problem 800
* [Sol1](project_euler/problem_800/sol1.py)
## Quantum
* [Q Fourier Transform](quantum/q_fourier_transform.py)
## Scheduling
* [First Come First Served](scheduling/first_come_first_served.py)
* [Highest Response Ratio Next](scheduling/highest_response_ratio_next.py)
* [Job Sequencing With Deadline](scheduling/job_sequencing_with_deadline.py)
* [Multi Level Feedback Queue](scheduling/multi_level_feedback_queue.py)
* [Non Preemptive Shortest Job First](scheduling/non_preemptive_shortest_job_first.py)
* [Round Robin](scheduling/round_robin.py)
* [Shortest Job First](scheduling/shortest_job_first.py)
## Searches
* [Binary Search](searches/binary_search.py)
* [Binary Tree Traversal](searches/binary_tree_traversal.py)
* [Double Linear Search](searches/double_linear_search.py)
* [Double Linear Search Recursion](searches/double_linear_search_recursion.py)
* [Fibonacci Search](searches/fibonacci_search.py)
* [Hill Climbing](searches/hill_climbing.py)
* [Interpolation Search](searches/interpolation_search.py)
* [Jump Search](searches/jump_search.py)
* [Linear Search](searches/linear_search.py)
* [Quick Select](searches/quick_select.py)
* [Sentinel Linear Search](searches/sentinel_linear_search.py)
* [Simple Binary Search](searches/simple_binary_search.py)
* [Simulated Annealing](searches/simulated_annealing.py)
* [Tabu Search](searches/tabu_search.py)
* [Ternary Search](searches/ternary_search.py)
## Sorts
* [Bead Sort](sorts/bead_sort.py)
* [Binary Insertion Sort](sorts/binary_insertion_sort.py)
* [Bitonic Sort](sorts/bitonic_sort.py)
* [Bogo Sort](sorts/bogo_sort.py)
* [Bubble Sort](sorts/bubble_sort.py)
* [Bucket Sort](sorts/bucket_sort.py)
* [Circle Sort](sorts/circle_sort.py)
* [Cocktail Shaker Sort](sorts/cocktail_shaker_sort.py)
* [Comb Sort](sorts/comb_sort.py)
* [Counting Sort](sorts/counting_sort.py)
* [Cycle Sort](sorts/cycle_sort.py)
* [Double Sort](sorts/double_sort.py)
* [Dutch National Flag Sort](sorts/dutch_national_flag_sort.py)
* [Exchange Sort](sorts/exchange_sort.py)
* [External Sort](sorts/external_sort.py)
* [Gnome Sort](sorts/gnome_sort.py)
* [Heap Sort](sorts/heap_sort.py)
* [Insertion Sort](sorts/insertion_sort.py)
* [Intro Sort](sorts/intro_sort.py)
* [Iterative Merge Sort](sorts/iterative_merge_sort.py)
* [Merge Insertion Sort](sorts/merge_insertion_sort.py)
* [Merge Sort](sorts/merge_sort.py)
* [Msd Radix Sort](sorts/msd_radix_sort.py)
* [Natural Sort](sorts/natural_sort.py)
* [Odd Even Sort](sorts/odd_even_sort.py)
* [Odd Even Transposition Parallel](sorts/odd_even_transposition_parallel.py)
* [Odd Even Transposition Single Threaded](sorts/odd_even_transposition_single_threaded.py)
* [Pancake Sort](sorts/pancake_sort.py)
* [Patience Sort](sorts/patience_sort.py)
* [Pigeon Sort](sorts/pigeon_sort.py)
* [Pigeonhole Sort](sorts/pigeonhole_sort.py)
* [Quick Sort](sorts/quick_sort.py)
* [Quick Sort 3 Partition](sorts/quick_sort_3_partition.py)
* [Radix Sort](sorts/radix_sort.py)
* [Recursive Bubble Sort](sorts/recursive_bubble_sort.py)
* [Recursive Insertion Sort](sorts/recursive_insertion_sort.py)
* [Recursive Mergesort Array](sorts/recursive_mergesort_array.py)
* [Recursive Quick Sort](sorts/recursive_quick_sort.py)
* [Selection Sort](sorts/selection_sort.py)
* [Shell Sort](sorts/shell_sort.py)
* [Shrink Shell Sort](sorts/shrink_shell_sort.py)
* [Slowsort](sorts/slowsort.py)
* [Stooge Sort](sorts/stooge_sort.py)
* [Strand Sort](sorts/strand_sort.py)
* [Tim Sort](sorts/tim_sort.py)
* [Topological Sort](sorts/topological_sort.py)
* [Tree Sort](sorts/tree_sort.py)
* [Unknown Sort](sorts/unknown_sort.py)
* [Wiggle Sort](sorts/wiggle_sort.py)
## Strings
* [Aho Corasick](strings/aho_corasick.py)
* [Alternative String Arrange](strings/alternative_string_arrange.py)
* [Anagrams](strings/anagrams.py)
* [Autocomplete Using Trie](strings/autocomplete_using_trie.py)
* [Barcode Validator](strings/barcode_validator.py)
* [Boyer Moore Search](strings/boyer_moore_search.py)
* [Can String Be Rearranged As Palindrome](strings/can_string_be_rearranged_as_palindrome.py)
* [Capitalize](strings/capitalize.py)
* [Check Anagrams](strings/check_anagrams.py)
* [Credit Card Validator](strings/credit_card_validator.py)
* [Detecting English Programmatically](strings/detecting_english_programmatically.py)
* [Dna](strings/dna.py)
* [Frequency Finder](strings/frequency_finder.py)
* [Hamming Distance](strings/hamming_distance.py)
* [Indian Phone Validator](strings/indian_phone_validator.py)
* [Is Contains Unique Chars](strings/is_contains_unique_chars.py)
* [Is Isogram](strings/is_isogram.py)
* [Is Pangram](strings/is_pangram.py)
* [Is Spain National Id](strings/is_spain_national_id.py)
* [Is Srilankan Phone Number](strings/is_srilankan_phone_number.py)
* [Is Valid Email Address](strings/is_valid_email_address.py)
* [Jaro Winkler](strings/jaro_winkler.py)
* [Join](strings/join.py)
* [Knuth Morris Pratt](strings/knuth_morris_pratt.py)
* [Levenshtein Distance](strings/levenshtein_distance.py)
* [Lower](strings/lower.py)
* [Manacher](strings/manacher.py)
* [Min Cost String Conversion](strings/min_cost_string_conversion.py)
* [Naive String Search](strings/naive_string_search.py)
* [Ngram](strings/ngram.py)
* [Palindrome](strings/palindrome.py)
* [Prefix Function](strings/prefix_function.py)
* [Rabin Karp](strings/rabin_karp.py)
* [Remove Duplicate](strings/remove_duplicate.py)
* [Reverse Letters](strings/reverse_letters.py)
* [Reverse Long Words](strings/reverse_long_words.py)
* [Reverse Words](strings/reverse_words.py)
* [Snake Case To Camel Pascal Case](strings/snake_case_to_camel_pascal_case.py)
* [Split](strings/split.py)
* [String Switch Case](strings/string_switch_case.py)
* [Text Justification](strings/text_justification.py)
* [Top K Frequent Words](strings/top_k_frequent_words.py)
* [Upper](strings/upper.py)
* [Wave](strings/wave.py)
* [Wildcard Pattern Matching](strings/wildcard_pattern_matching.py)
* [Word Occurrence](strings/word_occurrence.py)
* [Word Patterns](strings/word_patterns.py)
* [Z Function](strings/z_function.py)
## Web Programming
* [Co2 Emission](web_programming/co2_emission.py)
* [Covid Stats Via Xpath](web_programming/covid_stats_via_xpath.py)
* [Crawl Google Results](web_programming/crawl_google_results.py)
* [Crawl Google Scholar Citation](web_programming/crawl_google_scholar_citation.py)
* [Currency Converter](web_programming/currency_converter.py)
* [Current Stock Price](web_programming/current_stock_price.py)
* [Current Weather](web_programming/current_weather.py)
* [Daily Horoscope](web_programming/daily_horoscope.py)
* [Download Images From Google Query](web_programming/download_images_from_google_query.py)
* [Emails From Url](web_programming/emails_from_url.py)
* [Fetch Anime And Play](web_programming/fetch_anime_and_play.py)
* [Fetch Bbc News](web_programming/fetch_bbc_news.py)
* [Fetch Github Info](web_programming/fetch_github_info.py)
* [Fetch Jobs](web_programming/fetch_jobs.py)
* [Fetch Quotes](web_programming/fetch_quotes.py)
* [Fetch Well Rx Price](web_programming/fetch_well_rx_price.py)
* [Get Amazon Product Data](web_programming/get_amazon_product_data.py)
* [Get Imdb Top 250 Movies Csv](web_programming/get_imdb_top_250_movies_csv.py)
* [Get Imdbtop](web_programming/get_imdbtop.py)
* [Get Top Billionaires](web_programming/get_top_billionaires.py)
* [Get Top Hn Posts](web_programming/get_top_hn_posts.py)
* [Get User Tweets](web_programming/get_user_tweets.py)
* [Giphy](web_programming/giphy.py)
* [Instagram Crawler](web_programming/instagram_crawler.py)
* [Instagram Pic](web_programming/instagram_pic.py)
* [Instagram Video](web_programming/instagram_video.py)
* [Nasa Data](web_programming/nasa_data.py)
* [Open Google Results](web_programming/open_google_results.py)
* [Random Anime Character](web_programming/random_anime_character.py)
* [Recaptcha Verification](web_programming/recaptcha_verification.py)
* [Reddit](web_programming/reddit.py)
* [Search Books By Isbn](web_programming/search_books_by_isbn.py)
* [Slack Message](web_programming/slack_message.py)
* [Test Fetch Github Info](web_programming/test_fetch_github_info.py)
* [World Covid19 Stats](web_programming/world_covid19_stats.py)
| 1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
In the Combination Sum problem, we are given a list consisting of distinct integers.
We need to find all the combinations whose sum equals to target given.
We can use an element more than one.
Time complexity(Average Case): O(n!)
Constraints:
1 <= candidates.length <= 30
2 <= candidates[i] <= 40
All elements of candidates are distinct.
1 <= target <= 40
"""
def backtrack(
candidates: list, path: list, answer: list, target: int, previous_index: int
) -> None:
"""
A recursive function that searches for possible combinations. Backtracks in case
of a bigger current combination value than the target value.
Parameters
----------
previous_index: Last index from the previous search
target: The value we need to obtain by summing our integers in the path list.
answer: A list of possible combinations
path: Current combination
candidates: A list of integers we can use.
"""
if target == 0:
answer.append(path.copy())
else:
for index in range(previous_index, len(candidates)):
if target >= candidates[index]:
path.append(candidates[index])
backtrack(candidates, path, answer, target - candidates[index], index)
path.pop(len(path) - 1)
def combination_sum(candidates: list, target: int) -> list:
"""
>>> combination_sum([2, 3, 5], 8)
[[2, 2, 2, 2], [2, 3, 3], [3, 5]]
>>> combination_sum([2, 3, 6, 7], 7)
[[2, 2, 3], [7]]
>>> combination_sum([-8, 2.3, 0], 1)
Traceback (most recent call last):
...
RecursionError: maximum recursion depth exceeded in comparison
"""
path = [] # type: list[int]
answer = [] # type: list[list[int]]
backtrack(candidates, path, answer, target, 0)
return answer
def main() -> None:
print(combination_sum([-8, 2.3, 0], 1))
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| """
In the Combination Sum problem, we are given a list consisting of distinct integers.
We need to find all the combinations whose sum equals the given target.
We can use an element more than one.
Time complexity(Average Case): O(n!)
Constraints:
1 <= candidates.length <= 30
2 <= candidates[i] <= 40
All elements of candidates are distinct.
1 <= target <= 40
"""
def backtrack(
candidates: list, path: list, answer: list, target: int, previous_index: int
) -> None:
"""
A recursive function that searches for possible combinations. Backtracks in case
of a bigger current combination value than the target value.
Parameters
----------
previous_index: Last index from the previous search
target: The value we need to obtain by summing our integers in the path list.
answer: A list of possible combinations
path: Current combination
candidates: A list of integers we can use.
"""
if target == 0:
answer.append(path.copy())
else:
for index in range(previous_index, len(candidates)):
if target >= candidates[index]:
path.append(candidates[index])
backtrack(candidates, path, answer, target - candidates[index], index)
path.pop(len(path) - 1)
def combination_sum(candidates: list, target: int) -> list:
"""
>>> combination_sum([2, 3, 5], 8)
[[2, 2, 2, 2], [2, 3, 3], [3, 5]]
>>> combination_sum([2, 3, 6, 7], 7)
[[2, 2, 3], [7]]
>>> combination_sum([-8, 2.3, 0], 1)
Traceback (most recent call last):
...
RecursionError: maximum recursion depth exceeded
"""
path = [] # type: list[int]
answer = [] # type: list[list[int]]
backtrack(candidates, path, answer, target, 0)
return answer
def main() -> None:
print(combination_sum([-8, 2.3, 0], 1))
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| 1 |
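The row above keeps the deliberate `RecursionError` doctest (its message text changed in newer Pythons, hence the before/after diff). As a hypothetical alternative, a sketch with an up-front input guard — `combination_sum_safe` is a name invented here, not part of the repository — avoids unbounded recursion for non-positive or non-integer candidates instead of relying on the recursion limit:

```python
def combination_sum_safe(candidates: list[int], target: int) -> list[list[int]]:
    # Hypothetical guard: reject inputs (e.g. [-8, 2.3, 0]) that would
    # otherwise recurse forever, rather than waiting for RecursionError.
    if any(not isinstance(c, int) or c <= 0 for c in candidates):
        raise ValueError("candidates must be positive integers")
    answer: list[list[int]] = []
    path: list[int] = []

    def backtrack(start: int, remaining: int) -> None:
        # Same backtracking scheme as the file above: reuse the current
        # index so each candidate may appear multiple times.
        if remaining == 0:
            answer.append(path.copy())
            return
        for i in range(start, len(candidates)):
            if candidates[i] <= remaining:
                path.append(candidates[i])
                backtrack(i, remaining - candidates[i])
                path.pop()

    backtrack(0, target)
    return answer
```

With positive integer candidates it produces the same combinations as the doctests above, e.g. `combination_sum_safe([2, 3, 5], 8)`.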
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions
"""
from math import factorial, pi
def maclaurin_sin(theta: float, accuracy: int = 30) -> float:
"""
Finds the maclaurin approximation of sin
:param theta: the angle to which sin is found
:param accuracy: the degree of accuracy wanted minimum
:return: the value of sine in radians
>>> from math import isclose, sin
>>> all(isclose(maclaurin_sin(x, 50), sin(x)) for x in range(-25, 25))
True
>>> maclaurin_sin(10)
-0.544021110889369
>>> maclaurin_sin(-10)
0.5440211108893703
>>> maclaurin_sin(10, 15)
-0.5440211108893689
>>> maclaurin_sin(-10, 15)
0.5440211108893703
>>> maclaurin_sin("10")
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires either an int or float for theta
>>> maclaurin_sin(10, -30)
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires a positive int for accuracy
>>> maclaurin_sin(10, 30.5)
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires a positive int for accuracy
>>> maclaurin_sin(10, "30")
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires a positive int for accuracy
"""
if not isinstance(theta, (int, float)):
raise ValueError("maclaurin_sin() requires either an int or float for theta")
if not isinstance(accuracy, int) or accuracy <= 0:
raise ValueError("maclaurin_sin() requires a positive int for accuracy")
theta = float(theta)
div = theta // (2 * pi)
theta -= 2 * div * pi
return sum(
(-1) ** r * theta ** (2 * r + 1) / factorial(2 * r + 1) for r in range(accuracy)
)
def maclaurin_cos(theta: float, accuracy: int = 30) -> float:
"""
Finds the maclaurin approximation of cos
:param theta: the angle to which cos is found
:param accuracy: the degree of accuracy wanted
:return: the value of cosine in radians
>>> from math import isclose, cos
>>> all(isclose(maclaurin_cos(x, 50), cos(x)) for x in range(-25, 25))
True
>>> maclaurin_cos(5)
0.28366218546322675
>>> maclaurin_cos(-5)
0.2836621854632266
>>> maclaurin_cos(10, 15)
-0.8390715290764525
>>> maclaurin_cos(-10, 15)
-0.8390715290764521
>>> maclaurin_cos("10")
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires either an int or float for theta
>>> maclaurin_cos(10, -30)
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires a positive int for accuracy
>>> maclaurin_cos(10, 30.5)
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires a positive int for accuracy
>>> maclaurin_cos(10, "30")
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires a positive int for accuracy
"""
if not isinstance(theta, (int, float)):
raise ValueError("maclaurin_cos() requires either an int or float for theta")
if not isinstance(accuracy, int) or accuracy <= 0:
raise ValueError("maclaurin_cos() requires a positive int for accuracy")
theta = float(theta)
div = theta // (2 * pi)
theta -= 2 * div * pi
return sum((-1) ** r * theta ** (2 * r) / factorial(2 * r) for r in range(accuracy))
if __name__ == "__main__":
import doctest
doctest.testmod()
print(maclaurin_sin(10))
print(maclaurin_sin(-10))
print(maclaurin_sin(10, 15))
print(maclaurin_sin(-10, 15))
print(maclaurin_cos(5))
print(maclaurin_cos(-5))
print(maclaurin_cos(10, 15))
print(maclaurin_cos(-10, 15))
| """
https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions
"""
from math import factorial, pi
def maclaurin_sin(theta: float, accuracy: int = 30) -> float:
"""
Finds the maclaurin approximation of sin
:param theta: the angle to which sin is found
:param accuracy: the degree of accuracy wanted minimum
:return: the value of sine in radians
>>> from math import isclose, sin
>>> all(isclose(maclaurin_sin(x, 50), sin(x)) for x in range(-25, 25))
True
>>> maclaurin_sin(10)
-0.5440211108893691
>>> maclaurin_sin(-10)
0.5440211108893704
>>> maclaurin_sin(10, 15)
-0.5440211108893689
>>> maclaurin_sin(-10, 15)
0.5440211108893703
>>> maclaurin_sin("10")
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires either an int or float for theta
>>> maclaurin_sin(10, -30)
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires a positive int for accuracy
>>> maclaurin_sin(10, 30.5)
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires a positive int for accuracy
>>> maclaurin_sin(10, "30")
Traceback (most recent call last):
...
ValueError: maclaurin_sin() requires a positive int for accuracy
"""
if not isinstance(theta, (int, float)):
raise ValueError("maclaurin_sin() requires either an int or float for theta")
if not isinstance(accuracy, int) or accuracy <= 0:
raise ValueError("maclaurin_sin() requires a positive int for accuracy")
theta = float(theta)
div = theta // (2 * pi)
theta -= 2 * div * pi
return sum(
(-1) ** r * theta ** (2 * r + 1) / factorial(2 * r + 1) for r in range(accuracy)
)
def maclaurin_cos(theta: float, accuracy: int = 30) -> float:
"""
Finds the maclaurin approximation of cos
:param theta: the angle to which cos is found
:param accuracy: the degree of accuracy wanted
:return: the value of cosine in radians
>>> from math import isclose, cos
>>> all(isclose(maclaurin_cos(x, 50), cos(x)) for x in range(-25, 25))
True
>>> maclaurin_cos(5)
0.2836621854632268
>>> maclaurin_cos(-5)
0.2836621854632265
>>> maclaurin_cos(10, 15)
-0.8390715290764525
>>> maclaurin_cos(-10, 15)
-0.8390715290764521
>>> maclaurin_cos("10")
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires either an int or float for theta
>>> maclaurin_cos(10, -30)
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires a positive int for accuracy
>>> maclaurin_cos(10, 30.5)
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires a positive int for accuracy
>>> maclaurin_cos(10, "30")
Traceback (most recent call last):
...
ValueError: maclaurin_cos() requires a positive int for accuracy
"""
if not isinstance(theta, (int, float)):
raise ValueError("maclaurin_cos() requires either an int or float for theta")
if not isinstance(accuracy, int) or accuracy <= 0:
raise ValueError("maclaurin_cos() requires a positive int for accuracy")
theta = float(theta)
div = theta // (2 * pi)
theta -= 2 * div * pi
return sum((-1) ** r * theta ** (2 * r) / factorial(2 * r) for r in range(accuracy))
if __name__ == "__main__":
import doctest
doctest.testmod()
print(maclaurin_sin(10))
print(maclaurin_sin(-10))
print(maclaurin_sin(10, 15))
print(maclaurin_sin(-10, 15))
print(maclaurin_cos(5))
print(maclaurin_cos(-5))
print(maclaurin_cos(10, 15))
print(maclaurin_cos(-10, 15))
| 1 |
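The `theta -= 2 * div * pi` step in the file above is what keeps 30 terms sufficient: the Taylor series converges slowly for large |theta|, so the angle is first reduced into one period. A minimal standalone sketch of that idea (`maclaurin_sin_sketch` is an illustrative name, not from the repository):

```python
from math import factorial, pi


def maclaurin_sin_sketch(theta: float, terms: int = 30) -> float:
    # Reduce theta into [0, 2*pi) first; without this, theta**(2r+1)
    # grows faster than (2r+1)! shrinks for moderate term counts.
    theta -= 2 * pi * (theta // (2 * pi))
    return sum(
        (-1) ** r * theta ** (2 * r + 1) / factorial(2 * r + 1)
        for r in range(terms)
    )
```

After reduction the truncation error of 30 terms is far below float precision, so the result matches `math.sin` to within `isclose` tolerances.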
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| beautifulsoup4
fake_useragent
imageio
keras
lxml
matplotlib
numpy
opencv-python
pandas
pillow
projectq
qiskit
qiskit-aer
requests
rich
scikit-fuzzy
scikit-learn
statsmodels
sympy
tensorflow
texttable
tweepy
xgboost
yulewalker
| beautifulsoup4
fake_useragent
imageio
keras
lxml
matplotlib
numpy
opencv-python
pandas
pillow
projectq
qiskit ; python_version < '3.12'
qiskit-aer ; python_version < '3.12'
requests
rich
scikit-fuzzy
scikit-learn
statsmodels
sympy
tensorflow ; python_version < '3.12'
texttable
tweepy
xgboost
yulewalker
| 1 |
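The `; python_version < '3.12'` suffixes added to the requirements row above are PEP 508 environment markers: pip only installs the requirement when the marker evaluates true for the running interpreter. A toy sketch of just that one marker form (real resolvers use the full PEP 508 grammar, e.g. via `packaging.markers`; `marker_allows` is a name invented here):

```python
import sys


def marker_allows(requirement: str) -> bool:
    # Minimal sketch: handles only "name ; python_version < 'X.Y'",
    # the single marker form used in the requirements diff above.
    if ";" not in requirement:
        return True
    _, marker = requirement.split(";", 1)
    marker = marker.strip()
    assert marker.startswith("python_version <"), "only this form is sketched"
    bound = marker.split("<", 1)[1].strip().strip("'\"")
    major, minor = (int(part) for part in bound.split("."))
    return sys.version_info[:2] < (major, minor)
```

On Python 3.12, `marker_allows("tensorflow ; python_version < '3.12'")` is `False`, which is exactly how the PR disables the incompatible packages.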
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #!/usr/bin/env python3
"""
Simulation of the Quantum Key Distribution (QKD) protocol called BB84,
created by Charles Bennett and Gilles Brassard in 1984.
BB84 is a key-distribution protocol that ensures secure key distribution
using qubits instead of classical bits. The generated key is the result
of simulating a quantum circuit. Our algorithm to construct the circuit
is as follows:
Alice generates two binary strings. One encodes the basis for each qubit:
- 0 -> {0,1} basis.
- 1 -> {+,-} basis.
The other encodes the state:
- 0 -> |0> or |+>.
- 1 -> |1> or |->.
Bob also generates a binary string and uses the same convention to choose
a basis for measurement. Based on the following results, we follow the
algorithm below:
X|0> = |1>
H|0> = |+>
HX|0> = |->
1. Whenever Alice wants to encode 1 in a qubit, she applies an
X (NOT) gate to the qubit. To encode 0, no action is needed.
2. Wherever she wants to encode it in the {+,-} basis, she applies
an H (Hadamard) gate. No action is necessary to encode a qubit in
the {0,1} basis.
3. She then sends the qubits to Bob (symbolically represented in
this circuit using wires).
4. Bob measures the qubits according to his binary string for
measurement. To measure a qubit in the {+,-} basis, he applies
an H gate to the corresponding qubit and then performs a measurement.
References:
https://en.wikipedia.org/wiki/BB84
https://qiskit.org/textbook/ch-algorithms/quantum-key-distribution.html
"""
import numpy as np
import qiskit
def bb84(key_len: int = 8, seed: int | None = None) -> str:
"""
Performs the BB84 protocol using a key made of `key_len` bits.
The two parties in the key distribution are called Alice and Bob.
Args:
key_len: The length of the generated key in bits. The default is 8.
seed: Seed for the random number generator.
Mostly used for testing. Default is None.
Returns:
key: The key generated using BB84 protocol.
>>> bb84(16, seed=0)
'0111110111010010'
>>> bb84(8, seed=0)
'10110001'
"""
# Set up the random number generator.
rng = np.random.default_rng(seed=seed)
# Roughly 25% of the qubits will contribute to the key.
# So we take more than we need.
num_qubits = 6 * key_len
# Measurement basis for Alice's qubits.
alice_basis = rng.integers(2, size=num_qubits)
# The set of states Alice will prepare.
alice_state = rng.integers(2, size=num_qubits)
# Measurement basis for Bob's qubits.
bob_basis = rng.integers(2, size=num_qubits)
# Quantum Circuit to simulate BB84
bb84_circ = qiskit.QuantumCircuit(num_qubits, name="BB84")
# Alice prepares her qubits according to rules above.
for index, _ in enumerate(alice_basis):
if alice_state[index] == 1:
bb84_circ.x(index)
if alice_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
# Bob measures the received qubits according to rules above.
for index, _ in enumerate(bob_basis):
if bob_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
bb84_circ.measure_all()
# Simulate the quantum circuit.
sim = qiskit.Aer.get_backend("aer_simulator")
# We only need to run one shot because the key is unique.
# Multiple shots will produce the same key.
job = qiskit.execute(bb84_circ, sim, shots=1, seed_simulator=seed)
# Returns the result of measurement.
result = job.result().get_counts(bb84_circ).most_frequent()
# Extracting the generated key from the simulation results.
# Only keep measurement results where Alice and Bob chose the same basis.
gen_key = "".join(
[
result_bit
for alice_basis_bit, bob_basis_bit, result_bit in zip(
alice_basis, bob_basis, result
)
if alice_basis_bit == bob_basis_bit
]
)
# Get final key. Pad with 0 if too short, otherwise truncate.
key = gen_key[:key_len] if len(gen_key) >= key_len else gen_key.ljust(key_len, "0")
return key
if __name__ == "__main__":
print(f"The generated key is : {bb84(8, seed=0)}")
from doctest import testmod
testmod()
| #!/usr/bin/env python3
"""
Simulation of the Quantum Key Distribution (QKD) protocol called BB84,
created by Charles Bennett and Gilles Brassard in 1984.
BB84 is a key-distribution protocol that ensures secure key distribution
using qubits instead of classical bits. The generated key is the result
of simulating a quantum circuit. Our algorithm to construct the circuit
is as follows:
Alice generates two binary strings. One encodes the basis for each qubit:
- 0 -> {0,1} basis.
- 1 -> {+,-} basis.
The other encodes the state:
- 0 -> |0> or |+>.
- 1 -> |1> or |->.
Bob also generates a binary string and uses the same convention to choose
a basis for measurement. Based on the following results, we follow the
algorithm below:
X|0> = |1>
H|0> = |+>
HX|0> = |->
1. Whenever Alice wants to encode 1 in a qubit, she applies an
X (NOT) gate to the qubit. To encode 0, no action is needed.
2. Wherever she wants to encode it in the {+,-} basis, she applies
an H (Hadamard) gate. No action is necessary to encode a qubit in
the {0,1} basis.
3. She then sends the qubits to Bob (symbolically represented in
this circuit using wires).
4. Bob measures the qubits according to his binary string for
measurement. To measure a qubit in the {+,-} basis, he applies
an H gate to the corresponding qubit and then performs a measurement.
References:
https://en.wikipedia.org/wiki/BB84
https://qiskit.org/textbook/ch-algorithms/quantum-key-distribution.html
"""
import numpy as np
import qiskit
def bb84(key_len: int = 8, seed: int | None = None) -> str:
"""
Performs the BB84 protocol using a key made of `key_len` bits.
The two parties in the key distribution are called Alice and Bob.
Args:
key_len: The length of the generated key in bits. The default is 8.
seed: Seed for the random number generator.
Mostly used for testing. Default is None.
Returns:
key: The key generated using BB84 protocol.
>>> bb84(16, seed=0)
'0111110111010010'
>>> bb84(8, seed=0)
'10110001'
"""
# Set up the random number generator.
rng = np.random.default_rng(seed=seed)
# Roughly 25% of the qubits will contribute to the key.
# So we take more than we need.
num_qubits = 6 * key_len
# Measurement basis for Alice's qubits.
alice_basis = rng.integers(2, size=num_qubits)
# The set of states Alice will prepare.
alice_state = rng.integers(2, size=num_qubits)
# Measurement basis for Bob's qubits.
bob_basis = rng.integers(2, size=num_qubits)
# Quantum Circuit to simulate BB84
bb84_circ = qiskit.QuantumCircuit(num_qubits, name="BB84")
# Alice prepares her qubits according to rules above.
for index, _ in enumerate(alice_basis):
if alice_state[index] == 1:
bb84_circ.x(index)
if alice_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
# Bob measures the received qubits according to rules above.
for index, _ in enumerate(bob_basis):
if bob_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
bb84_circ.measure_all()
# Simulate the quantum circuit.
sim = qiskit.Aer.get_backend("aer_simulator")
# We only need to run one shot because the key is unique.
# Multiple shots will produce the same key.
job = qiskit.execute(bb84_circ, sim, shots=1, seed_simulator=seed)
# Returns the result of measurement.
result = job.result().get_counts(bb84_circ).most_frequent()
# Extracting the generated key from the simulation results.
# Only keep measurement results where Alice and Bob chose the same basis.
gen_key = "".join(
[
result_bit
for alice_basis_bit, bob_basis_bit, result_bit in zip(
alice_basis, bob_basis, result
)
if alice_basis_bit == bob_basis_bit
]
)
# Get final key. Pad with 0 if too short, otherwise truncate.
key = gen_key[:key_len] if len(gen_key) >= key_len else gen_key.ljust(key_len, "0")
return key
if __name__ == "__main__":
print(f"The generated key is : {bb84(8, seed=0)}")
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
def print_distance(distance: list[float], src: int):
print(f"Vertex\tShortest Distance from vertex {src}")
for i, d in enumerate(distance):
print(f"{i}\t\t{d}")
def check_negative_cycle(
graph: list[dict[str, int]], distance: list[float], edge_count: int
) -> bool:
for j in range(edge_count):
u, v, w = (graph[j][k] for k in ["src", "dst", "weight"])
if distance[u] != float("inf") and distance[u] + w < distance[v]:
return True
return False
def bellman_ford(
graph: list[dict[str, int]], vertex_count: int, edge_count: int, src: int
) -> list[float]:
"""
Returns shortest paths from a vertex src to all
other vertices.
>>> edges = [(2, 1, -10), (3, 2, 3), (0, 3, 5), (0, 1, 4)]
>>> g = [{"src": s, "dst": d, "weight": w} for s, d, w in edges]
>>> bellman_ford(g, 4, 4, 0)
[0.0, -2.0, 8.0, 5.0]
>>> g = [{"src": s, "dst": d, "weight": w} for s, d, w in edges + [(1, 3, 5)]]
>>> bellman_ford(g, 4, 5, 0)
Traceback (most recent call last):
...
Exception: Negative cycle found
"""
distance = [float("inf")] * vertex_count
distance[src] = 0.0
for _ in range(vertex_count - 1):
for j in range(edge_count):
u, v, w = (graph[j][k] for k in ["src", "dst", "weight"])
if distance[u] != float("inf") and distance[u] + w < distance[v]:
distance[v] = distance[u] + w
negative_cycle_exists = check_negative_cycle(graph, distance, edge_count)
if negative_cycle_exists:
raise Exception("Negative cycle found")
return distance
if __name__ == "__main__":
import doctest
doctest.testmod()
V = int(input("Enter number of vertices: ").strip())
E = int(input("Enter number of edges: ").strip())
graph: list[dict[str, int]] = [{} for _ in range(E)]
for i in range(E):
print("Edge ", i + 1)
src, dest, weight = (
int(x)
for x in input("Enter source, destination, weight: ").strip().split(" ")
)
graph[i] = {"src": src, "dst": dest, "weight": weight}
source = int(input("\nEnter shortest path source:").strip())
shortest_distance = bellman_ford(graph, V, E, source)
print_distance(shortest_distance, source)
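The relax-then-check flow above can be exercised end to end. The sketch below inlines a minimal Bellman-Ford using the same edge-dict representation ({"src": ..., "dst": ..., "weight": ...}); the sample graph and expected distances are illustrative assumptions, not taken from the repository file.

```python
# Minimal Bellman-Ford sketch mirroring the edge-dict representation above.
def bellman_ford(graph, vertex_count, src):
    distance = [float("inf")] * vertex_count
    distance[src] = 0.0
    # Relax every edge vertex_count - 1 times.
    for _ in range(vertex_count - 1):
        for edge in graph:
            u, v, w = edge["src"], edge["dst"], edge["weight"]
            if distance[u] != float("inf") and distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
    # One extra pass: any further improvement implies a negative cycle.
    for edge in graph:
        u, v, w = edge["src"], edge["dst"], edge["weight"]
        if distance[u] != float("inf") and distance[u] + w < distance[v]:
            raise Exception("Negative cycle found")
    return distance

# Hypothetical 3-vertex graph with one negative (but non-cyclic) edge.
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3)]
graph = [{"src": s, "dst": d, "weight": w} for s, d, w in edges]
print(bellman_ford(graph, 3, 0))  # [0.0, 4.0, 1.0]
```

The final pass is what distinguishes Bellman-Ford from Dijkstra here: a distance that can still shrink after vertex_count - 1 rounds can only mean a negative-weight cycle is reachable from the source.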
|
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
| """
https://cp-algorithms.com/string/z-function.html
Z-function or Z algorithm
Efficient algorithm for pattern occurrence in a string
Time Complexity: O(n) - where n is the length of the string
"""
def z_function(input_str: str) -> list[int]:
"""
For the given string this function computes value for each index,
which represents the maximal length substring starting from the index
and is the same as the prefix of the same size
e.g. for the string 'abab', the value at the second index would be 2
For the value of the first element the algorithm always returns 0
>>> z_function("abracadabra")
[0, 0, 0, 1, 0, 1, 0, 4, 0, 0, 1]
>>> z_function("aaaa")
[0, 3, 2, 1]
>>> z_function("zxxzxxz")
[0, 0, 0, 4, 0, 0, 1]
"""
z_result = [0] * len(input_str)
# initialize interval's left pointer and right pointer
left_pointer, right_pointer = 0, 0
for i in range(1, len(input_str)):
# case when current index is inside the interval
if i <= right_pointer:
min_edge = min(right_pointer - i + 1, z_result[i - left_pointer])
z_result[i] = min_edge
while go_next(i, z_result, input_str):
z_result[i] += 1
# if new index's result gives us more right interval,
# we've to update left_pointer and right_pointer
if i + z_result[i] - 1 > right_pointer:
left_pointer, right_pointer = i, i + z_result[i] - 1
return z_result
def go_next(i: int, z_result: list[int], s: str) -> bool:
"""
Check if we have to move forward to the next characters or not
"""
return i + z_result[i] < len(s) and s[z_result[i]] == s[i + z_result[i]]
def find_pattern(pattern: str, input_str: str) -> int:
"""
Example of using z-function for pattern occurrence
Given function returns the number of times 'pattern'
appears in 'input_str' as a substring
>>> find_pattern("abr", "abracadabra")
2
>>> find_pattern("a", "aaaa")
4
>>> find_pattern("xz", "zxxzxxz")
2
"""
answer = 0
# concatenate 'pattern' and 'input_str' and call z_function
# with concatenated string
z_result = z_function(pattern + input_str)
for val in z_result:
# if the value is greater than or equal to the length of the pattern,
# this index is the starting position of a substring
# equal to the pattern string
if val >= len(pattern):
answer += 1
return answer
if __name__ == "__main__":
import doctest
doctest.testmod()
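By definition, z[i] is the length of the longest common prefix of the string and its suffix starting at i. A brute-force version makes that definition concrete and is handy for cross-checking the optimized interval-based loop above; z_naive is a hypothetical helper for illustration, not part of the repository file.

```python
# O(n^2) reference implementation of the z-function: for each i, extend a
# match between the prefix of s and the suffix s[i:] character by character.
def z_naive(s: str) -> list[int]:
    n = len(s)
    z = [0] * n
    for i in range(1, n):
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
    return z

print(z_naive("abracadabra"))  # [0, 0, 0, 1, 0, 1, 0, 4, 0, 0, 1]
```

The optimized version computes the same array but reuses previously computed values inside the [left_pointer, right_pointer] interval, which is what brings the total work down to O(n).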
| """
https://cp-algorithms.com/string/z-function.html
Z-function or Z algorithm
Efficient algorithm for pattern occurrence in a string
Time Complexity: O(n) - where n is the length of the string
"""
def z_function(input_str: str) -> list[int]:
"""
For the given string this function computes value for each index,
which represents the maximal length substring starting from the index
and is the same as the prefix of the same size
e.x. for string 'abab' for second index value would be 2
For the value of the first element the algorithm always returns 0
>>> z_function("abracadabra")
[0, 0, 0, 1, 0, 1, 0, 4, 0, 0, 1]
>>> z_function("aaaa")
[0, 3, 2, 1]
>>> z_function("zxxzxxz")
[0, 0, 0, 4, 0, 0, 1]
"""
z_result = [0 for i in range(len(input_str))]
# initialize interval's left pointer and right pointer
left_pointer, right_pointer = 0, 0
for i in range(1, len(input_str)):
# case when current index is inside the interval
if i <= right_pointer:
min_edge = min(right_pointer - i + 1, z_result[i - left_pointer])
z_result[i] = min_edge
while go_next(i, z_result, input_str):
z_result[i] += 1
# if new index's result gives us more right interval,
# we've to update left_pointer and right_pointer
if i + z_result[i] - 1 > right_pointer:
left_pointer, right_pointer = i, i + z_result[i] - 1
return z_result
def go_next(i: int, z_result: list[int], s: str) -> bool:
"""
Check if we have to move forward to the next characters or not
"""
return i + z_result[i] < len(s) and s[z_result[i]] == s[i + z_result[i]]
def find_pattern(pattern: str, input_str: str) -> int:
"""
Example of using z-function for pattern occurrence
Given function returns the number of times 'pattern'
appears in 'input_str' as a substring
>>> find_pattern("abr", "abracadabra")
2
>>> find_pattern("a", "aaaa")
4
>>> find_pattern("xz", "zxxzxxz")
2
"""
answer = 0
# concatenate 'pattern' and 'input_str' and call z_function
# with concatenated string
z_result = z_function(pattern + input_str)
for val in z_result:
# if value is greater then length of the pattern string
# that means this index is starting position of substring
# which is equal to pattern string
if val >= len(pattern):
answer += 1
return answer
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
| tasks:
- init: pip3 install -r ./requirements.txt
|
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
| """
https://en.wikipedia.org/wiki/N-gram
"""
def create_ngram(sentence: str, ngram_size: int) -> list[str]:
"""
Create ngrams from a sentence
>>> create_ngram("I am a sentence", 2)
['I ', ' a', 'am', 'm ', ' a', 'a ', ' s', 'se', 'en', 'nt', 'te', 'en', 'nc', 'ce']
>>> create_ngram("I am an NLPer", 2)
['I ', ' a', 'am', 'm ', ' a', 'an', 'n ', ' N', 'NL', 'LP', 'Pe', 'er']
>>> create_ngram("This is short", 50)
[]
"""
return [sentence[i : i + ngram_size] for i in range(len(sentence) - ngram_size + 1)]
if __name__ == "__main__":
from doctest import testmod
testmod()
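The list-comprehension slide used by create_ngram generalizes beyond characters. The sketch below repeats the character-level function and applies the same windowing idea to word tokens; the word-level bigram example is an illustrative assumption, not part of the repository file.

```python
# Character-level n-grams, as in the function above.
def create_ngram(sentence: str, ngram_size: int) -> list[str]:
    return [sentence[i : i + ngram_size] for i in range(len(sentence) - ngram_size + 1)]

print(create_ngram("banana", 3))  # ['ban', 'ana', 'nan', 'ana']

# The same sliding window over word tokens yields word bigrams.
words = "the quick brown fox".split()
bigrams = [" ".join(words[i : i + 2]) for i in range(len(words) - 1)]
print(bigrams)  # ['the quick', 'quick brown', 'brown fox']
```

Note that when ngram_size exceeds the sentence length the range is empty, which is why the doctest above returns [] rather than raising.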
| """
https://en.wikipedia.org/wiki/N-gram
"""
def create_ngram(sentence: str, ngram_size: int) -> list[str]:
"""
Create ngrams from a sentence
>>> create_ngram("I am a sentence", 2)
['I ', ' a', 'am', 'm ', ' a', 'a ', ' s', 'se', 'en', 'nt', 'te', 'en', 'nc', 'ce']
>>> create_ngram("I am an NLPer", 2)
['I ', ' a', 'am', 'm ', ' a', 'an', 'n ', ' N', 'NL', 'LP', 'Pe', 'er']
>>> create_ngram("This is short", 50)
[]
"""
return [sentence[i : i + ngram_size] for i in range(len(sentence) - ngram_size + 1)]
if __name__ == "__main__":
from doctest import testmod
testmod()
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
| """
A NAND Gate is a logic gate in boolean algebra which results in 0 (False) if both
inputs are 1, and 1 (True) otherwise. It is equivalent to an AND gate
followed by a NOT gate.
Following is the truth table of a NAND Gate:
------------------------------
| Input 1 | Input 2 | Output |
------------------------------
| 0 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
------------------------------
Refer - https://www.geeksforgeeks.org/logic-gates-in-python/
"""
def nand_gate(input_1: int, input_2: int) -> int:
"""
Calculate NAND of the input values
>>> nand_gate(0, 0)
1
>>> nand_gate(0, 1)
1
>>> nand_gate(1, 0)
1
>>> nand_gate(1, 1)
0
"""
return int((input_1, input_2).count(0) != 0)
def test_nand_gate() -> None:
"""
Tests the nand_gate function
"""
assert nand_gate(0, 0) == 1
assert nand_gate(0, 1) == 1
assert nand_gate(1, 0) == 1
assert nand_gate(1, 1) == 0
if __name__ == "__main__":
print(nand_gate(0, 0))
print(nand_gate(0, 1))
print(nand_gate(1, 0))
print(nand_gate(1, 1))
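NAND is functionally complete: every other boolean gate can be expressed with NAND alone. The sketch below repeats nand_gate and builds NOT, AND, and OR on top of it; not_gate, and_gate, and or_gate are hypothetical helpers for illustration, not part of the repository file.

```python
def nand_gate(a: int, b: int) -> int:
    # 1 unless both inputs are 1.
    return int((a, b).count(0) != 0)

def not_gate(a: int) -> int:
    # NOT a == a NAND a
    return nand_gate(a, a)

def and_gate(a: int, b: int) -> int:
    # a AND b == NOT (a NAND b)
    return not_gate(nand_gate(a, b))

def or_gate(a: int, b: int) -> int:
    # a OR b == (NOT a) NAND (NOT b), by De Morgan's law
    return nand_gate(not_gate(a), not_gate(b))

print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([or_gate(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
```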
| """
A NAND Gate is a logic gate in boolean algebra which results to 0 (False) if both
the inputs are 1, and 1 (True) otherwise. It's similar to adding
a NOT gate along with an AND gate.
Following is the truth table of a NAND Gate:
------------------------------
| Input 1 | Input 2 | Output |
------------------------------
| 0 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
------------------------------
Refer - https://www.geeksforgeeks.org/logic-gates-in-python/
"""
def nand_gate(input_1: int, input_2: int) -> int:
"""
Calculate NAND of the input values
>>> nand_gate(0, 0)
1
>>> nand_gate(0, 1)
1
>>> nand_gate(1, 0)
1
>>> nand_gate(1, 1)
0
"""
return int((input_1, input_2).count(0) != 0)
def test_nand_gate() -> None:
"""
Tests the nand_gate function
"""
assert nand_gate(0, 0) == 1
assert nand_gate(0, 1) == 1
assert nand_gate(1, 0) == 1
assert nand_gate(1, 1) == 0
if __name__ == "__main__":
print(nand_gate(0, 0))
print(nand_gate(0, 1))
print(nand_gate(1, 0))
print(nand_gate(1, 1))
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Minimax helps to achieve maximum score in a game by checking all possible moves
depth is current depth in game tree.
nodeIndex is index of current node in scores[].
if move is of maximizer return true else false
leaves of game tree is stored in scores[]
height is maximum height of Game tree
"""
from __future__ import annotations
import math
def minimax(
depth: int, node_index: int, is_max: bool, scores: list[int], height: float
) -> int:
"""
>>> import math
>>> scores = [90, 23, 6, 33, 21, 65, 123, 34423]
>>> height = math.log(len(scores), 2)
>>> minimax(0, 0, True, scores, height)
65
>>> minimax(-1, 0, True, scores, height)
Traceback (most recent call last):
...
ValueError: Depth cannot be less than 0
>>> minimax(0, 0, True, [], 2)
Traceback (most recent call last):
...
ValueError: Scores cannot be empty
>>> scores = [3, 5, 2, 9, 12, 5, 23, 23]
>>> height = math.log(len(scores), 2)
>>> minimax(0, 0, True, scores, height)
12
"""
if depth < 0:
raise ValueError("Depth cannot be less than 0")
if len(scores) == 0:
raise ValueError("Scores cannot be empty")
if depth == height:
return scores[node_index]
if is_max:
return max(
minimax(depth + 1, node_index * 2, False, scores, height),
minimax(depth + 1, node_index * 2 + 1, False, scores, height),
)
return min(
minimax(depth + 1, node_index * 2, True, scores, height),
minimax(depth + 1, node_index * 2 + 1, True, scores, height),
)
def main() -> None:
scores = [90, 23, 6, 33, 21, 65, 123, 34423]
height = math.log(len(scores), 2)
print("Optimal value : ", end="")
print(minimax(0, 0, True, scores, height))
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| """
Minimax helps to achieve maximum score in a game by checking all possible moves
depth is current depth in game tree.
nodeIndex is index of current node in scores[].
if move is of maximizer return true else false
leaves of game tree is stored in scores[]
height is maximum height of Game tree
"""
from __future__ import annotations
import math
def minimax(
depth: int, node_index: int, is_max: bool, scores: list[int], height: float
) -> int:
"""
>>> import math
>>> scores = [90, 23, 6, 33, 21, 65, 123, 34423]
>>> height = math.log(len(scores), 2)
>>> minimax(0, 0, True, scores, height)
65
>>> minimax(-1, 0, True, scores, height)
Traceback (most recent call last):
...
ValueError: Depth cannot be less than 0
>>> minimax(0, 0, True, [], 2)
Traceback (most recent call last):
...
ValueError: Scores cannot be empty
>>> scores = [3, 5, 2, 9, 12, 5, 23, 23]
>>> height = math.log(len(scores), 2)
>>> minimax(0, 0, True, scores, height)
12
"""
if depth < 0:
raise ValueError("Depth cannot be less than 0")
if len(scores) == 0:
raise ValueError("Scores cannot be empty")
if depth == height:
return scores[node_index]
if is_max:
return max(
minimax(depth + 1, node_index * 2, False, scores, height),
minimax(depth + 1, node_index * 2 + 1, False, scores, height),
)
return min(
minimax(depth + 1, node_index * 2, True, scores, height),
minimax(depth + 1, node_index * 2 + 1, True, scores, height),
)
def main() -> None:
scores = [90, 23, 6, 33, 21, 65, 123, 34423]
height = math.log(len(scores), 2)
print("Optimal value : ", end="")
print(minimax(0, 0, True, scores, height))
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Arithmetic mean
Reference: https://en.wikipedia.org/wiki/Arithmetic_mean
Arithmetic series
Reference: https://en.wikipedia.org/wiki/Arithmetic_series
(The URL above will redirect you to arithmetic progression)
"""
def is_arithmetic_series(series: list) -> bool:
"""
checking whether the input series is an arithmetic series or not
>>> is_arithmetic_series([2, 4, 6])
True
>>> is_arithmetic_series([3, 6, 12, 24])
False
>>> is_arithmetic_series([1, 2, 3])
True
>>> is_arithmetic_series(4)
Traceback (most recent call last):
...
ValueError: Input series is not valid, valid series - [2, 4, 6]
>>> is_arithmetic_series([])
Traceback (most recent call last):
...
ValueError: Input list must be a non empty list
"""
if not isinstance(series, list):
raise ValueError("Input series is not valid, valid series - [2, 4, 6]")
if len(series) == 0:
raise ValueError("Input list must be a non empty list")
if len(series) == 1:
return True
common_diff = series[1] - series[0]
for index in range(len(series) - 1):
if series[index + 1] - series[index] != common_diff:
return False
return True
def arithmetic_mean(series: list) -> float:
"""
return the arithmetic mean of series
>>> arithmetic_mean([2, 4, 6])
4.0
>>> arithmetic_mean([3, 6, 9, 12])
7.5
>>> arithmetic_mean(4)
Traceback (most recent call last):
...
ValueError: Input series is not valid, valid series - [2, 4, 6]
>>> arithmetic_mean([4, 8, 1])
4.333333333333333
>>> arithmetic_mean([1, 2, 3])
2.0
>>> arithmetic_mean([])
Traceback (most recent call last):
...
ValueError: Input list must be a non empty list
"""
if not isinstance(series, list):
raise ValueError("Input series is not valid, valid series - [2, 4, 6]")
if len(series) == 0:
raise ValueError("Input list must be a non empty list")
answer = 0
for val in series:
answer += val
return answer / len(series)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Arithmetic mean
Reference: https://en.wikipedia.org/wiki/Arithmetic_mean
Arithmetic series
Reference: https://en.wikipedia.org/wiki/Arithmetic_series
(The URL above will redirect you to arithmetic progression)
"""
def is_arithmetic_series(series: list) -> bool:
"""
checking whether the input series is an arithmetic series or not
>>> is_arithmetic_series([2, 4, 6])
True
>>> is_arithmetic_series([3, 6, 12, 24])
False
>>> is_arithmetic_series([1, 2, 3])
True
>>> is_arithmetic_series(4)
Traceback (most recent call last):
...
ValueError: Input series is not valid, valid series - [2, 4, 6]
>>> is_arithmetic_series([])
Traceback (most recent call last):
...
ValueError: Input list must be a non empty list
"""
if not isinstance(series, list):
raise ValueError("Input series is not valid, valid series - [2, 4, 6]")
if len(series) == 0:
raise ValueError("Input list must be a non empty list")
if len(series) == 1:
return True
common_diff = series[1] - series[0]
for index in range(len(series) - 1):
if series[index + 1] - series[index] != common_diff:
return False
return True
def arithmetic_mean(series: list) -> float:
"""
return the arithmetic mean of series
>>> arithmetic_mean([2, 4, 6])
4.0
>>> arithmetic_mean([3, 6, 9, 12])
7.5
>>> arithmetic_mean(4)
Traceback (most recent call last):
...
ValueError: Input series is not valid, valid series - [2, 4, 6]
>>> arithmetic_mean([4, 8, 1])
4.333333333333333
>>> arithmetic_mean([1, 2, 3])
2.0
>>> arithmetic_mean([])
Traceback (most recent call last):
...
ValueError: Input list must be a non empty list
"""
if not isinstance(series, list):
raise ValueError("Input series is not valid, valid series - [2, 4, 6]")
if len(series) == 0:
raise ValueError("Input list must be a non empty list")
answer = 0
for val in series:
answer += val
return answer / len(series)
if __name__ == "__main__":
import doctest
doctest.testmod()
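A property worth noting: for an arithmetic series, the value returned by `arithmetic_mean` equals the average of the first and last terms. A quick self-contained check (re-declaring a minimal version of the function above, without the input validation):

```python
def arithmetic_mean(series: list) -> float:
    # Minimal re-statement of the function above, without validation.
    return sum(series) / len(series)


series = [2, 4, 6, 8]  # arithmetic series with common difference 2
mean = arithmetic_mean(series)
# For an arithmetic series the mean equals (first + last) / 2.
print(mean, (series[0] + series[-1]) / 2)  # -> 5.0 5.0
```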
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
The Mandelbrot set is the set of complex numbers "c" for which the series
"z_(n+1) = z_n * z_n + c" does not diverge, i.e. remains bounded. Thus, a
complex number "c" is a member of the Mandelbrot set if, when starting with
"z_0 = 0" and applying the iteration repeatedly, the absolute value of
"z_n" remains bounded for all "n > 0". Complex numbers can be written as
"a + b*i": "a" is the real component, usually drawn on the x-axis, and "b*i"
is the imaginary component, usually drawn on the y-axis. Most visualizations
of the Mandelbrot set use a color-coding to indicate after how many steps in
the series the numbers outside the set diverge. Images of the Mandelbrot set
exhibit an elaborate and infinitely complicated boundary that reveals
progressively ever-finer recursive detail at increasing magnifications, making
the boundary of the Mandelbrot set a fractal curve.
(description adapted from https://en.wikipedia.org/wiki/Mandelbrot_set )
(see also https://en.wikipedia.org/wiki/Plotting_algorithms_for_the_Mandelbrot_set )
"""
import colorsys
from PIL import Image # type: ignore
def get_distance(x: float, y: float, max_step: int) -> float:
"""
Return the relative distance (= step/max_step) after which the complex number
constituted by this x-y-pair diverges. Members of the Mandelbrot set do not
diverge so their distance is 1.
>>> get_distance(0, 0, 50)
1.0
>>> get_distance(0.5, 0.5, 50)
0.061224489795918366
>>> get_distance(2, 0, 50)
0.0
"""
a = x
b = y
for step in range(max_step): # noqa: B007
a_new = a * a - b * b + x
b = 2 * a * b + y
a = a_new
# divergence happens for all complex numbers with an absolute value
# greater than 2 (checked here via squared magnitude > 4)
if a * a + b * b > 4:
break
return step / (max_step - 1)
def get_black_and_white_rgb(distance: float) -> tuple:
"""
Black&white color-coding that ignores the relative distance. The Mandelbrot
set is black, everything else is white.
>>> get_black_and_white_rgb(0)
(255, 255, 255)
>>> get_black_and_white_rgb(0.5)
(255, 255, 255)
>>> get_black_and_white_rgb(1)
(0, 0, 0)
"""
if distance == 1:
return (0, 0, 0)
else:
return (255, 255, 255)
def get_color_coded_rgb(distance: float) -> tuple:
"""
Color-coding taking the relative distance into account. The Mandelbrot set
is black.
>>> get_color_coded_rgb(0)
(255, 0, 0)
>>> get_color_coded_rgb(0.5)
(0, 255, 255)
>>> get_color_coded_rgb(1)
(0, 0, 0)
"""
if distance == 1:
return (0, 0, 0)
else:
return tuple(round(i * 255) for i in colorsys.hsv_to_rgb(distance, 1, 1))
def get_image(
image_width: int = 800,
image_height: int = 600,
figure_center_x: float = -0.6,
figure_center_y: float = 0,
figure_width: float = 3.2,
max_step: int = 50,
use_distance_color_coding: bool = True,
) -> Image.Image:
"""
Function to generate the image of the Mandelbrot set. Two types of coordinates
are used: image-coordinates that refer to the pixels and figure-coordinates
that refer to the complex numbers inside and outside the Mandelbrot set. The
figure-coordinates in the arguments of this function determine which section
of the Mandelbrot set is viewed. The main area of the Mandelbrot set is
roughly between "-1.5 < x < 0.5" and "-1 < y < 1" in the figure-coordinates.
Commenting out tests that slow down pytest...
# 13.35s call fractals/mandelbrot.py::mandelbrot.get_image
# >>> get_image().load()[0,0]
(255, 0, 0)
# >>> get_image(use_distance_color_coding = False).load()[0,0]
(255, 255, 255)
"""
img = Image.new("RGB", (image_width, image_height))
pixels = img.load()
# loop through the image-coordinates
for image_x in range(image_width):
for image_y in range(image_height):
# determine the figure-coordinates based on the image-coordinates
figure_height = figure_width / image_width * image_height
figure_x = figure_center_x + (image_x / image_width - 0.5) * figure_width
figure_y = figure_center_y + (image_y / image_height - 0.5) * figure_height
distance = get_distance(figure_x, figure_y, max_step)
# color the corresponding pixel based on the selected coloring-function
if use_distance_color_coding:
pixels[image_x, image_y] = get_color_coded_rgb(distance)
else:
pixels[image_x, image_y] = get_black_and_white_rgb(distance)
return img
if __name__ == "__main__":
import doctest
doctest.testmod()
# colored version, full figure
img = get_image()
# uncomment for colored version, different section, zoomed in
# img = get_image(figure_center_x = -0.6, figure_center_y = -0.4,
# figure_width = 0.8)
# uncomment for black and white version, full figure
# img = get_image(use_distance_color_coding = False)
# uncomment to save the image
# img.save("mandelbrot.png")
img.show()
| """
The Mandelbrot set is the set of complex numbers "c" for which the series
"z_(n+1) = z_n * z_n + c" does not diverge, i.e. remains bounded. Thus, a
complex number "c" is a member of the Mandelbrot set if, when starting with
"z_0 = 0" and applying the iteration repeatedly, the absolute value of
"z_n" remains bounded for all "n > 0". Complex numbers can be written as
"a + b*i": "a" is the real component, usually drawn on the x-axis, and "b*i"
is the imaginary component, usually drawn on the y-axis. Most visualizations
of the Mandelbrot set use a color-coding to indicate after how many steps in
the series the numbers outside the set diverge. Images of the Mandelbrot set
exhibit an elaborate and infinitely complicated boundary that reveals
progressively ever-finer recursive detail at increasing magnifications, making
the boundary of the Mandelbrot set a fractal curve.
(description adapted from https://en.wikipedia.org/wiki/Mandelbrot_set )
(see also https://en.wikipedia.org/wiki/Plotting_algorithms_for_the_Mandelbrot_set )
"""
import colorsys
from PIL import Image # type: ignore
def get_distance(x: float, y: float, max_step: int) -> float:
"""
Return the relative distance (= step/max_step) after which the complex number
constituted by this x-y-pair diverges. Members of the Mandelbrot set do not
diverge so their distance is 1.
>>> get_distance(0, 0, 50)
1.0
>>> get_distance(0.5, 0.5, 50)
0.061224489795918366
>>> get_distance(2, 0, 50)
0.0
"""
a = x
b = y
for step in range(max_step): # noqa: B007
a_new = a * a - b * b + x
b = 2 * a * b + y
a = a_new
# divergence happens for all complex numbers with an absolute value
# greater than 2 (checked here via squared magnitude > 4)
if a * a + b * b > 4:
break
return step / (max_step - 1)
def get_black_and_white_rgb(distance: float) -> tuple:
"""
Black&white color-coding that ignores the relative distance. The Mandelbrot
set is black, everything else is white.
>>> get_black_and_white_rgb(0)
(255, 255, 255)
>>> get_black_and_white_rgb(0.5)
(255, 255, 255)
>>> get_black_and_white_rgb(1)
(0, 0, 0)
"""
if distance == 1:
return (0, 0, 0)
else:
return (255, 255, 255)
def get_color_coded_rgb(distance: float) -> tuple:
"""
Color-coding taking the relative distance into account. The Mandelbrot set
is black.
>>> get_color_coded_rgb(0)
(255, 0, 0)
>>> get_color_coded_rgb(0.5)
(0, 255, 255)
>>> get_color_coded_rgb(1)
(0, 0, 0)
"""
if distance == 1:
return (0, 0, 0)
else:
return tuple(round(i * 255) for i in colorsys.hsv_to_rgb(distance, 1, 1))
def get_image(
image_width: int = 800,
image_height: int = 600,
figure_center_x: float = -0.6,
figure_center_y: float = 0,
figure_width: float = 3.2,
max_step: int = 50,
use_distance_color_coding: bool = True,
) -> Image.Image:
"""
Function to generate the image of the Mandelbrot set. Two types of coordinates
are used: image-coordinates that refer to the pixels and figure-coordinates
that refer to the complex numbers inside and outside the Mandelbrot set. The
figure-coordinates in the arguments of this function determine which section
of the Mandelbrot set is viewed. The main area of the Mandelbrot set is
roughly between "-1.5 < x < 0.5" and "-1 < y < 1" in the figure-coordinates.
Commenting out tests that slow down pytest...
# 13.35s call fractals/mandelbrot.py::mandelbrot.get_image
# >>> get_image().load()[0,0]
(255, 0, 0)
# >>> get_image(use_distance_color_coding = False).load()[0,0]
(255, 255, 255)
"""
img = Image.new("RGB", (image_width, image_height))
pixels = img.load()
# loop through the image-coordinates
for image_x in range(image_width):
for image_y in range(image_height):
# determine the figure-coordinates based on the image-coordinates
figure_height = figure_width / image_width * image_height
figure_x = figure_center_x + (image_x / image_width - 0.5) * figure_width
figure_y = figure_center_y + (image_y / image_height - 0.5) * figure_height
distance = get_distance(figure_x, figure_y, max_step)
# color the corresponding pixel based on the selected coloring-function
if use_distance_color_coding:
pixels[image_x, image_y] = get_color_coded_rgb(distance)
else:
pixels[image_x, image_y] = get_black_and_white_rgb(distance)
return img
if __name__ == "__main__":
import doctest
doctest.testmod()
# colored version, full figure
img = get_image()
# uncomment for colored version, different section, zoomed in
# img = get_image(figure_center_x = -0.6, figure_center_y = -0.4,
# figure_width = 0.8)
# uncomment for black and white version, full figure
# img = get_image(use_distance_color_coding = False)
# uncomment to save the image
# img.save("mandelbrot.png")
img.show()
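The escape-time logic in `get_distance` can be exercised without PIL. The sketch below re-declares the same iteration locally and checks two points: 0 never escapes (a member of the set, distance 1.0) and 2 escapes on the first step (distance 0.0).

```python
def get_distance(x: float, y: float, max_step: int) -> float:
    # Same escape-time iteration as in the file above; the tuple
    # assignment updates a and b simultaneously, matching a_new.
    a, b = x, y
    step = 0
    for step in range(max_step):
        a, b = a * a - b * b + x, 2 * a * b + y
        if a * a + b * b > 4:  # |z| > 2, i.e. squared magnitude > 4
            break
    return step / (max_step - 1)


print(get_distance(0, 0, 50))  # 0 never escapes -> 1.0
print(get_distance(2, 0, 50))  # 2 escapes on the first step -> 0.0
```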
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| The Project Gutenberg eBook, Prehistoric Men, by Robert J. (Robert John)
Braidwood, Illustrated by Susan T. Richert
This eBook is for the use of anyone anywhere in the United States and most
other parts of the world at no cost and with almost no restrictions
whatsoever. You may copy it, give it away or re-use it under the terms of
the Project Gutenberg License included with this eBook or online at
www.gutenberg.org. If you are not located in the United States, you'll have
to check the laws of the country where you are located before using this ebook.
Title: Prehistoric Men
Author: Robert J. (Robert John) Braidwood
Release Date: July 28, 2016 [eBook #52664]
Language: English
Character set encoding: UTF-8
***START OF THE PROJECT GUTENBERG EBOOK PREHISTORIC MEN***
E-text prepared by Stephen Hutcheson, Dave Morgan, Charlie Howard, and the
Online Distributed Proofreading Team (http://www.pgdp.net)
Note: Project Gutenberg also has an HTML version of this
file which includes the original illustrations.
See 52664-h.htm or 52664-h.zip:
(http://www.gutenberg.org/files/52664/52664-h/52664-h.htm)
or
(http://www.gutenberg.org/files/52664/52664-h.zip)
Transcriber's note:
Some characters might not display in this UTF-8 text
version. If so, the reader should consult the HTML
version referred to above. One example of this might
occur in the second paragraph under "Choppers and
Adze-like Tools", page 46, which contains the phrase
an adze cutting edge is ? shaped. The symbol before
shaped looks like a sharply-italicized sans-serif L.
Devices that cannot display that symbol may substitute
a question mark, a square, or other symbol.
PREHISTORIC MEN
by
ROBERT J. BRAIDWOOD
Research Associate, Old World Prehistory
Professor
Oriental Institute and Department of Anthropology
University of Chicago
Drawings by Susan T. Richert
[Illustration]
Chicago Natural History Museum
Popular Series
Anthropology, Number 37
Third Edition Issued in Co-operation with
The Oriental Institute, The University of Chicago
Edited by Lillian A. Ross
Printed in the United States of America
by Chicago Natural History Museum Press
Copyright 1948, 1951, and 1957 by Chicago Natural History Museum
First edition 1948
Second edition 1951
Third edition 1957
Fourth edition 1959
Preface
[Illustration]
Like the writing of most professional archeologists, mine has been
confined to so-called learned papers. Good, bad, or indifferent, these
papers were in a jargon that only my colleagues and a few advanced
students could understand. Hence, when I was asked to do this little
book, I soon found it extremely difficult to say what I meant in simple
fashion. The style is new to me, but I hope the reader will not find it
forced or pedantic; at least I have done my very best to tell the story
simply and clearly.
Many friends have aided in the preparation of the book. The whimsical
charm of Miss Susan Richert's illustrations adds enormously to the
spirit I wanted. She gave freely of her own time on the drawings and
in planning the book with me. My colleagues at the University of
Chicago, especially Professor Wilton M. Krogman (now of the University
of Pennsylvania), and also Mrs. Linda Braidwood, Associate of the
Oriental Institute, and Professors Fay-Cooper Cole and Sol Tax, of
the Department of Anthropology, gave me counsel in matters bearing on
their special fields, and the Department of Anthropology bore some of
the expense of the illustrations. From Mrs. Irma Hunter and Mr. Arnold
Maremont, who are not archeologists at all and have only an intelligent
layman's notion of archeology, I had sound advice on how best to tell
the story. I am deeply indebted to all these friends.
While I was preparing the second edition, I had the great fortune
to be able to rework the third chapter with Professor Sherwood L.
Washburn, now of the Department of Anthropology of the University of
California, and the fourth, fifth, and sixth chapters with Professor
Hallum L. Movius, Jr., of the Peabody Museum, Harvard University. The
book has gained greatly in accuracy thereby. In matters of dating,
Professor Movius and the indications of Professor W. F. Libby's Carbon
14 chronology project have both encouraged me to choose the lowest
dates now current for the events of the Pleistocene Ice Age. There is
still no certain way of fixing a direct chronology for most of the
Pleistocene, but Professor Libby's method appears very promising for
its end range and for proto-historic dates. In any case, this book
names periods, and new dates may be written in against mine, if new
and better dating systems appear.
I wish to thank Dr. Clifford C. Gregg, Director of Chicago Natural
History Museum, for the opportunity to publish this book. My old
friend, Dr. Paul S. Martin, Chief Curator in the Department of
Anthropology, asked me to undertake the job and inspired me to complete
it. I am also indebted to Miss Lillian A. Ross, Associate Editor of
Scientific Publications, and to Mr. George I. Quimby, Curator of
Exhibits in Anthropology, for all the time they have given me in
getting the manuscript into proper shape.
ROBERT J. BRAIDWOOD
_June 15, 1950_
Preface to the Third Edition
In preparing the enlarged third edition, many of the above mentioned
friends have again helped me. I have picked the brains of Professor F.
Clark Howell of the Department of Anthropology of the University of
Chicago in reworking the earlier chapters, and he was very patient in
the matter, which I sincerely appreciate.
All of Mrs. Susan Richert Allen's original drawings appear, but a few
necessary corrections have been made in some of the charts and some new
drawings have been added by Mr. John Pfiffner, Staff Artist, Chicago
Natural History Museum.
ROBERT J. BRAIDWOOD
_March 1, 1959_
Contents
PAGE
How We Learn about Prehistoric Men 7
The Changing World in Which Prehistoric Men Lived 17
Prehistoric Men Themselves 22
Cultural Beginnings 38
More Evidence of Culture 56
Early Moderns 70
End and Prelude 92
The First Revolution 121
The Conquest of Civilization 144
End of Prehistory 162
Summary 176
List of Books 180
Index 184
HOW WE LEARN about Prehistoric Men
[Illustration]
Prehistory means the time before written history began. Actually, more
than 99 per cent of man's story is prehistory. Man is at least half a
million years old, but he did not begin to write history (or to write
anything) until about 5,000 years ago.
The men who lived in prehistoric times left us no history books, but
they did unintentionally leave a record of their presence and their way
of life. This record is studied and interpreted by different kinds of
scientists.
SCIENTISTS WHO FIND OUT ABOUT PREHISTORIC MEN
The scientists who study the bones and teeth and any other parts
they find of the bodies of prehistoric men are called _physical
anthropologists_. Physical anthropologists are trained, much like
doctors, to know all about the human body. They study living people,
too; they know more about the biological facts of human races than
anybody else. If the police find a badly decayed body in a trunk,
they ask a physical anthropologist to tell them what the person
originally looked like. The physical anthropologists who specialize in
prehistoric men work with fossils, so they are sometimes called _human
paleontologists_.
ARCHEOLOGISTS
There is a kind of scientist who studies the things that prehistoric
men made and did. Such a scientist is called an _archeologist_. It is
the archeologist's business to look for the stone and metal tools, the
pottery, the graves, and the caves or huts of the men who lived before
history began.
But there is more to archeology than just looking for things. In
Professor V. Gordon Childe's words, archeology furnishes "a sort of
history of human activity, provided always that the actions have
produced concrete results and left recognizable material traces." You
will see that there are at least three points in what Childe says:
1. The archeologists have to find the traces of things left behind by
ancient man, and
2. Only a few objects may be found, for most of these were probably
too soft or too breakable to last through the years. However,
3. The archeologist must use whatever he can find to tell a story--to
make a sort of history--from the objects and living-places and
graves that have escaped destruction.
What I mean is this: Let us say you are walking through a dump yard,
and you find a rusty old spark plug. If you want to think about what
the spark plug means, you quickly remember that it is a part of an
automobile motor. This tells you something about the man who threw
the spark plug on the dump. He either had an automobile, or he knew
or lived near someone who did. He can't have lived so very long ago,
you'll remember, because spark plugs and automobiles are only about
sixty years old.
When you think about the old spark plug in this way you have
just been making the beginnings of what we call an archeological
_interpretation_; you have been making the spark plug tell a story.
It is the same way with the man-made things we archeologists find
and put in museums. Usually, only a few of these objects are pretty
to look at; but each of them has some sort of story to tell. Making
the interpretation of his finds is the most important part of the
archeologist's job. It is the way he gets at the sort of history of
human activity which is expected of archeology.
SOME OTHER SCIENTISTS
There are many other scientists who help the archeologist and the
physical anthropologist find out about prehistoric men. The geologists
help us tell the age of the rocks or caves or gravel beds in which
human bones or man-made objects are found. There are other scientists
with names which all begin with paleo (the Greek word for old). The
_paleontologists_ study fossil animals. There are also, for example,
such scientists as _paleobotanists_ and _paleoclimatologists_, who
study ancient plants and climates. These scientists help us to know
the kinds of animals and plants that were living in prehistoric times
and so could be used for food by ancient man; what the weather was
like; and whether there were glaciers. Also, when I tell you that
prehistoric men did not appear until long after the great dinosaurs had
disappeared, I go on the say-so of the paleontologists. They know that
fossils of men and of dinosaurs are not found in the same geological
period. The dinosaur fossils come in early periods, the fossils of men
much later.
Since World War II even the atomic scientists have been helping the
archeologists. By testing the amount of radioactivity left in charcoal,
wood, or other vegetable matter obtained from archeological sites, they
have been able to date the sites. Shell has been used also, and even
the hair of Egyptian mummies. The dates of geological and climatic
events have also been discovered. Some of this work has been done from
drillings taken from the bottom of the sea.
This dating by radioactivity has considerably shortened the dates which
the archeologists used to give. If you find that some of the dates
I give here are more recent than the dates you see in other books
on prehistory, it is because I am using one of the new lower dating
systems.
[Illustration: RADIOCARBON CHART
The rate of disappearance of radioactivity as time passes.[1]]
[1] It is important that the limitations of the radioactive carbon
dating system be held in mind. As the statistics involved in
the system are used, there are two chances in three that the
date of the sample falls within the range given as plus or
minus an added number of years. For example, the date for the
Jarmo village (see chart), given as 6750 ± 200 B.C., really
means that there are only two chances in three that the real
date of the charcoal sampled fell between 6950 and 6550 B.C.
We have also begun to suspect that there are ways in which the
samples themselves may have become contaminated, either on
the early or on the late side. We now tend to be suspicious of
single radioactive carbon determinations, or of determinations
from one site alone. But as a fabric of consistent
determinations for several or more sites of one archeological
period, we gain confidence in the dates.
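The arithmetic of this footnote, and the decay curve the chart shows, can be sketched in a few lines (Python is my choice here, not the book's; the 5,568-year half-life is the Libby-era value and is my assumption, since the text does not give one):

```python
# Reading a radiocarbon date of 6750 plus-or-minus 200 B.C.,
# as described in the footnote above.
date_bc = 6750   # reported date, in years B.C.
sigma = 200      # the quoted plus-or-minus range

# Two chances in three (one standard deviation) that the true date
# falls inside the quoted range:
earliest, latest = date_bc + sigma, date_bc - sigma
print(f"two chances in three the charcoal dates "
      f"between {earliest} and {latest} B.C.")

# The chart's "rate of disappearance of radioactivity": after each
# half-life, half the carbon-14 remains.  5,568 years is the half-life
# in use in the 1950s; this value is an assumption, not from the text.
HALF_LIFE = 5568.0

def fraction_remaining(years: float) -> float:
    """Fraction of the original carbon-14 left after `years` years."""
    return 0.5 ** (years / HALF_LIFE)
```

For instance, `fraction_remaining(5568.0)` gives 0.5, and the range printed above matches the 6950-6550 B.C. spread the footnote works out.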
HOW THE SCIENTISTS FIND OUT
So far, this chapter has been mainly about the people who find out
about prehistoric men. We also need a word about _how_ they find out.
All our finds came by accident until about a hundred years ago. Men
digging wells, or digging in caves for fertilizer, often turned up
ancient swords or pots or stone arrowheads. People also found some odd
pieces of stone that didn't look like natural forms, but they also
didn't look like any known tool. As a result, the people who found them
gave them queer names; for example, "thunderbolts." The people thought
the strange stones came to earth as bolts of lightning. We know now
that these strange stones were prehistoric stone tools.
Many important finds still come to us by accident. In 1935, a British
dentist, A. T. Marston, found the first of two fragments of a very
important fossil human skull, in a gravel pit at Swanscombe, on the
River Thames, England. He had to wait nine months, until the face of
the gravel pit had been dug eight yards farther back, before the second
fragment appeared. They fitted! Then, twenty years later, still another
piece appeared. In 1928 workmen who were blasting out rock for the
breakwater in the port of Haifa began to notice flint tools. Thus the
story of cave men on Mount Carmel, in Palestine, began to be known.
Planned archeological digging is only about a century old. Even before
this, however, a few men realized the significance of objects they dug
from the ground; one of these early archeologists was our own Thomas
Jefferson. The first real mound-digger was a German grocer's clerk,
Heinrich Schliemann. Schliemann made a fortune as a merchant, first
in Europe and then in the California gold-rush of 1849. He became an
American citizen. Then he retired and had both money and time to test
an old idea of his. He believed that the heroes of ancient Troy and
Mycenae were once real Trojans and Greeks. He proved it by going to
Turkey and Greece and digging up the remains of both cities.
Schliemann had the great good fortune to find rich and spectacular
treasures, and he also had the common sense to keep notes and make
descriptions of what he found. He proved beyond doubt that many ancient
city mounds can be _stratified_. This means that there may be the
remains of many towns in a mound, one above another, like layers in a
cake.
You might like to have an idea of how mounds come to be in layers.
The original settlers may have chosen the spot because it had a good
spring and there were good fertile lands nearby, or perhaps because
it was close to some road or river or harbor. These settlers probably
built their town of stone and mud-brick. Finally, something would have
happened to the town--a flood, or a burning, or a raid by enemies--and
the walls of the houses would have fallen in or would have melted down
as mud in the rain. Nothing would have remained but the mud and debris
of a low mound of _one_ layer.
The second settlers would have wanted the spot for the same reasons
the first settlers did--good water, land, and roads. Also, the second
settlers would have found a nice low mound to build their houses on,
a protection from floods. But again, something would finally have
happened to the second town, and the walls of _its_ houses would have
come tumbling down. This makes the _second_ layer. And so on....
In Syria I once had the good fortune to dig on a large mound that had
no less than fifteen layers. Also, most of the layers were thick, and
there were signs of rebuilding and repairs within each layer. The mound
was more than a hundred feet high. In each layer, the building material
used had been a soft, unbaked mud-brick, and most of the debris
consisted of fallen or rain-melted mud from these mud-bricks.
This idea of _stratification_, like the cake layers, was already a
familiar one to the geologists by Schliemann's time. They could show
that their lowest layer of rock was oldest or earliest, and that the
overlying layers became more recent as one moved upward. Schliemann's
digging proved the same thing at Troy. His first (lowest and earliest)
city had at least nine layers above it; he thought that the second
layer contained the remains of Homer's Troy. We now know that Homeric
Troy was layer VIIa from the bottom; also, we count eleven layers or
sub-layers in total.
Schliemann's work marks the beginnings of modern archeology. Scholars
soon set out to dig on ancient sites, from Egypt to Central America.
ARCHEOLOGICAL INFORMATION
As time went on, the study of archeological materials--found either
by accident or by digging on purpose--began to show certain things.
Archeologists began to get ideas as to the kinds of objects that
belonged together. If you compared a mail-order catalogue of 1890 with
one of today, you would see a lot of differences. If you really studied
the two catalogues hard, you would also begin to see that certain
objects go together. Horseshoes and metal buggy tires and pieces of
harness would begin to fit into a picture with certain kinds of coal
stoves and furniture and china dishes and kerosene lamps. Our friend
the spark plug, and radios and electric refrigerators and light bulbs
would fit into a picture with different kinds of furniture and dishes
and tools. You won't be old enough to remember the kind of hats that
women wore in 1890, but you've probably seen pictures of them, and you
know very well they couldn't be worn with the fashions of today.
This is one of the ways that archeologists study their materials.
The various tools and weapons and jewelry, the pottery, the kinds
of houses, and even the ways of burying the dead tend to fit into
pictures. Some archeologists call all of the things that go together to
make such a picture an _assemblage_. The assemblage of the first layer
of Schliemann's Troy was as different from that of the seventh layer as
our 1900 mail-order catalogue is from the one of today.
The archeologists who came after Schliemann began to notice other
things and to compare them with occurrences in modern times. The
idea that people will buy better mousetraps goes back into very
ancient times. Today, if we make good automobiles or radios, we can
sell some of them in Turkey or even in Timbuktu. This means that a
few present-day types of American automobiles and radios form part
of present-day assemblages in both Turkey and Timbuktu. The total
present-day assemblage of Turkey is quite different from that of
Timbuktu or that of America, but they have at least some automobiles
and some radios in common.
Now these automobiles and radios will eventually wear out. Let us
suppose we could go to some remote part of Turkey or to Timbuktu in a
dream. We don't know what the date is, in our dream, but we see all
sorts of strange things and ways of living in both places. Nobody
tells us what the date is. But suddenly we see a 1936 Ford; so we
know that in our dream it has to be at least the year 1936, and only
as many years after that as we could reasonably expect a Ford to keep
in running order. The Ford would probably break down in twenty years'
time, so the Turkish or Timbuktu assemblage we're seeing in our dream
has to date at about A.D. 1936-56.
Archeologists not only date their ancient materials in this way; they
also see over what distances and between which peoples trading was
done. It turns out that there was a good deal of trading in ancient
times, probably all on a barter and exchange basis.
EVERYTHING BEGINS TO FIT TOGETHER
Now we need to pull these ideas all together and see the complicated
structure the archeologists can build with their materials.
Even the earliest archeologists soon found that there was a very long
range of prehistoric time which would yield only very simple things.
For this very long early part of prehistory, there was little to be
found but the flint tools which wandering, hunting and gathering
people made, and the bones of the wild animals they ate. Toward the
end of prehistoric time there was a general settling down with the
coming of agriculture, and all sorts of new things began to be made.
Archeologists soon got a general notion of what ought to appear with
what. Thus, it would upset a French prehistorian digging at the bottom
of a very early cave if he found a fine bronze sword, just as much as
it would upset him if he found a beer bottle. The people of his very
early cave layer simply could not have made bronze swords, which came
later, just as do beer bottles. Some accidental disturbance of the
layers of his cave must have happened.
With any luck, archeologists do their digging in a layered, stratified
site. They find the remains of everything that would last through
time, in several different layers. They know that the assemblage in
the bottom layer was laid down earlier than the assemblage in the next
layer above, and so on up to the topmost layer, which is the latest.
They look at the results of other digs and find that some other
archeologist 900 miles away has found ax-heads in his lowest layer,
exactly like the ax-heads of their fifth layer. This means that their
fifth layer must have been lived in at about the same time as was the
first layer in the site 900 miles away. It also may mean that the
people who lived in the two layers knew and traded with each other. Or
it could mean that they didn't necessarily know each other, but simply
that both traded with a third group at about the same time.
You can see that the more we dig and find, the more clearly the main
facts begin to stand out. We begin to be more sure of which people
lived at the same time, which earlier and which later. We begin to
know who traded with whom, and which peoples seemed to live off by
themselves. We begin to find enough skeletons in burials so that the
physical anthropologists can tell us what the people looked like. We
get animal bones, and a paleontologist may tell us they are all bones
of wild animals; or he may tell us that some or most of the bones are
those of domesticated animals, for instance, sheep or cattle, and
therefore the people must have kept herds.
More important than anything else--as our structure grows more
complicated and our materials increase--is the fact that a sort
of history of human activity does begin to appear. The habits or
traditions that men formed in the making of their tools and in the
ways they did things, begin to stand out for us. How characteristic
were these habits and traditions? What areas did they spread over?
How long did they last? We watch the different tools and the traces
of the way things were done--how the burials were arranged, what
the living-places were like, and so on. We wonder about the people
themselves, for the traces of habits and traditions are useful to us
only as clues to the men who once had them. So we ask the physical
anthropologists about the skeletons that we found in the burials. The
physical anthropologists tell us about the anatomy and the similarities
and differences which the skeletons show when compared with other
skeletons. The physical anthropologists are even working on a
method--chemical tests of the bones--that will enable them to discover
what the blood-type may have been. One thing is sure. We have never
found a group of skeletons so absolutely similar among themselves--so
cast from a single mould, so to speak--that we could claim to have a
pure race. I am sure we never shall.
We become particularly interested in any signs of change--when new
materials and tool types and ways of doing things replace old ones. We
watch for signs of social change and progress in one way or another.
We must do all this without one word of written history to aid us.
Everything we are concerned with goes back to the time _before_ men
learned to write. That is the prehistorian's job--to find out what
happened before history began.
THE CHANGING WORLD in which Prehistoric Men Lived
[Illustration]
Mankind, we'll say, is at least a half million years old. It is very
hard to understand how long a time half a million years really is.
If we were to compare this whole length of time to one day, we'd get
something like this: The present time is midnight, and Jesus was
born just five minutes and thirty-six seconds ago. Earliest history
began less than fifteen minutes ago. Everything before 11:45 was in
prehistoric time.
Or maybe we can grasp the length of time better in terms of
generations. As you know, primitive peoples tend to marry and have
children rather early in life. So suppose we say that twenty years
will make an average generation. At this rate there would be 25,000
generations in a half-million years. But our United States is much less
than ten generations old, twenty-five generations take us back before
the time of Columbus, Julius Caesar was alive just 100 generations ago,
David was king of Israel less than 150 generations ago, 250 generations
take us back to the beginning of written history. And there were 24,750
generations of men before written history began!
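The generation reckoning above is plain arithmetic, and a short sketch (in Python, my choice rather than anything in the book) confirms the figures:

```python
# Check the generation arithmetic used in the text above.
years_total = 500_000        # "a half-million years" of human existence
years_per_generation = 20    # "an average generation"
years_of_history = 5_000     # since written history began

generations_total = years_total // years_per_generation
generations_of_history = years_of_history // years_per_generation
generations_before_history = generations_total - generations_of_history

print(generations_total)           # 25000 generations in all
print(generations_of_history)      # 250 generations of written history
print(generations_before_history)  # 24750 generations before it
```

The three printed values match the 25,000, 250, and 24,750 the chapter gives.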
I should probably tell you that there is a new method of prehistoric
dating which would cut the earliest dates in my reckoning almost
in half. Dr. Cesare Emiliani, combining radioactive (C14) and
chemical (oxygen isotope) methods in the study of deep-sea borings,
has developed a system which would lower the total range of human
prehistory to about 300,000 years. The system is still too new to have
had general examination and testing. Hence, I have not used it in this
book; it would mainly affect the dates earlier than 25,000 years ago.
CHANGES IN ENVIRONMENT
The earth probably hasn't changed much in the last 5,000 years (250
generations). Men have built things on its surface and dug into it and
drawn boundaries on maps of it, but the places where rivers, lakes,
seas, and mountains now stand have changed very little.
In earlier times the earth looked very different. Geologists call the
last great geological period the _Pleistocene_. It began somewhere
between a half million and a million years ago, and was a time of great
changes. Sometimes we call it the "Ice Age," for in the Pleistocene
there were at least three or four times when large areas of earth
were covered with glaciers. The reason for my uncertainty is that
while there seem to have been four major mountain or alpine phases of
glaciation, there may only have been three general continental phases
in the Old World.[2]
[2] This is a complicated affair and I do not want to bother you
with its details. Both the alpine and the continental ice sheets
seem to have had minor fluctuations during their _main_ phases,
and the advances of the later phases destroyed many of the
traces of the earlier phases. The general textbooks have tended
to follow the names and numbers established for the Alps early
in this century by two German geologists. I will not bother you
with the names, but there were _four_ major phases. It is the
second of these alpine phases which seems to fit the traces of
the earliest of the great continental glaciations. In this book,
I will use the four-part system, since it is the most familiar,
but will add the word _alpine_ so you may remember to make the
transition to the continental system if you wish to do so.
Glaciers are great sheets of ice, sometimes over a thousand feet
thick, which are now known only in Greenland and Antarctica and in
high mountains. During several of the glacial periods in the Ice Age,
the glaciers covered most of Canada and the northern United States and
reached down to southern England and France in Europe. Smaller ice
sheets sat like caps on the Rockies, the Alps, and the Himalayas. The
continental glaciation only happened north of the equator, however, so
remember that "Ice Age" is only half true.
As you know, the amount of water on and about the earth does not vary.
These large glaciers contained millions of tons of water frozen into
ice. Because so much water was frozen and contained in the glaciers,
the water level of lakes and oceans was lowered. Flooded areas were
drained and appeared as dry land. There were times in the Ice Age when
there was no English Channel, so that England was not an island, and a
land bridge at the Dardanelles probably divided the Mediterranean from
the Black Sea.
A very important thing for people living during the time of a
glaciation was the region adjacent to the glacier. They could not, of
course, live on the ice itself. The questions would be how close could
they live to it, and how would they have had to change their way of
life to do so.
GLACIERS CHANGE THE WEATHER
Great sheets of ice change the weather. When the front of a glacier
stood at Milwaukee, the weather must have been bitterly cold in
Chicago. The climate of the whole world would have been different, and
you can see how animals and men would have been forced to move from one
place to another in search of food and warmth.
On the other hand, it looks as if only a minor proportion of the whole
Ice Age was really taken up by times of glaciation. In between came
the _interglacial_ periods. During these times the climate around
Chicago was as warm as it is now, and sometimes even warmer. It may
interest you to know that the last great glacier melted away less than
10,000 years ago. Professor Ernst Antevs thinks we may be living in an
interglacial period and that the Ice Age may not be over yet. So if you
want to make a killing in real estate for your several hundred times
great-grandchildren, you might buy some land in the Arizona desert or
the Sahara.
We do not yet know just why the glaciers appeared and disappeared, as
they did. It surely had something to do with an increase in rainfall
and a fall in temperature. It probably also had to do with a general
tendency for the land to rise at the beginning of the Pleistocene. We
know there was some mountain-building at that time. Hence, rain-bearing
winds nourished the rising and cooler uplands with snow. An increase
in all three of these factors--if they came together--would only have
needed to be slight. But exactly why this happened we do not know.
The reason I tell you about the glaciers is simply to remind you of the
changing world in which prehistoric men lived. Their surroundings--the
animals and plants they used for food, and the weather they had to
protect themselves from--were always changing. On the other hand, this
change happened over so long a period of time and was so slow that
individual people could not have noticed it. Glaciers, about which they
probably knew nothing, moved in hundreds of miles to the north of them.
The people must simply have wandered ever more southward in search
of the plants and animals on which they lived. Or some men may have
stayed where they were and learned to hunt different animals and eat
different foods. Prehistoric men had to keep adapting themselves to new
environments and those who were most adaptive were most successful.
OTHER CHANGES
Changes took place in the men themselves as well as in the ways they
lived. As time went on, they made better tools and weapons. Then, too,
we begin to find signs of how they started thinking of other things
than food and the tools to get it with. We find that they painted on
the walls of caves, and decorated their tools; we find that they buried
their dead.
At about the time when the last great glacier was finally melting away,
men in the Near East made the first basic change in human economy.
They began to plant grain, and they learned to raise and herd certain
animals. This meant that they could store food in granaries and on the
hoof against the bad times of the year. This first really basic change
in man's way of living has been called the "food-producing revolution."
By the time it happened, a modern kind of climate was beginning. Men
had already grown to look as they do now. Know-how in ways of living
had developed and progressed, slowly but surely, up to a point. It was
impossible for men to go beyond that point if they only hunted and
fished and gathered wild foods. Once the basic change was made--once
the food-producing revolution became effective--technology leaped ahead
and civilization and written history soon began.
Prehistoric Men THEMSELVES
[Illustration]
DO WE KNOW WHERE MAN ORIGINATED?
For a long time some scientists thought the cradle of mankind was in
central Asia. Other scientists insisted it was in Africa, and still
others said it might have been in Europe. Actually, we don't know
where it was. We don't even know that there was only _one_ cradle.
If we had to choose a cradle at this moment, we would probably say
Africa. But the southern portions of Asia and Europe may also have been
included in the general area. The scene of the early development of
mankind was certainly the Old World. It is pretty certain men didn't
reach North or South America until almost the end of the Ice Age--had
they done so earlier we would certainly have found some trace of them
by now.
The earliest tools we have yet found come from central and south
Africa. By the dating system I'm using, these tools must be over
500,000 years old. There are now reports that a few such early tools
have been found--at the Sterkfontein cave in South Africa--along with
the bones of small fossil men called australopithecines.
Not all scientists would agree that the australopithecines were men,
or would agree that the tools were made by the australopithecines
themselves. For these sticklers, the earliest bones of men come from
the island of Java. The date would be about 450,000 years ago. So far,
we have not yet found the tools which we suppose these earliest men in
the Far East must have made.
Let me say it another way. How old are the earliest traces of men we
now have? Over half a million years. This was a time when the first
alpine glaciation was happening in the north. What has been found so
far? The tools which the men of those times made, in different parts
of Africa. It is now fairly generally agreed that the men who made
the tools were the australopithecines. There is also a more man-like
jawbone at Kanam in Kenya, but its find-spot has been questioned. The
next earliest bones we have were found in Java, and they may be almost
a hundred thousand years younger than the earliest African finds. We
haven't yet found the tools of these early Javanese. Our knowledge of
tool-using in Africa spreads quickly as time goes on: soon after the
appearance of tools in the south we shall have them from as far north
as Algeria.
Very soon after the earliest Javanese come the bones of slightly more
developed people in Java, and the jawbone of a man who once lived in
what is now Germany. The same general glacial beds which yielded the
later Javanese bones and the German jawbone also include tools. These
finds come from the time of the second alpine glaciation.
So this is the situation. By the time of the end of the second alpine
or first continental glaciation (say 400,000 years ago) we have traces
of men from the extremes of the more southerly portions of the Old
World--South Africa, eastern Asia, and western Europe. There are also
some traces of men in the middle ground. In fact, Professor Franz
Weidenreich believed that creatures who were the immediate ancestors
of men had already spread over Europe, Africa, and Asia by the time
the Ice Age began. We certainly have no reason to disbelieve this, but
fortunate accidents of discovery have not yet given us the evidence to
prove it.
MEN AND APES
Many people used to get extremely upset at the ill-formed notion
that man "descended from the apes." Such words were much more likely
to start fights or "monkey trials" than the correct notion that all
living animals, including man, ascended or evolved from a single-celled
organism which lived in the primeval seas hundreds of millions of years
ago. Men are mammals, of the order called Primates, and mans living
relatives are the great apes. Men didn't "descend" from the apes or
apes from men, and mankind must have had much closer relatives who have
since become extinct.
Men stand erect. They also walk and run on their two feet. Apes are
happiest in trees, swinging with their arms from branch to branch.
Few branches of trees will hold the mighty gorilla, although he still
manages to sleep in trees. Apes can't stand really erect in our sense,
and when they have to run on the ground, they use the knuckles of their
hands as well as their feet.
A key group of fossil bones here are the south African
australopithecines. These are called the _Australopithecinae_ or
"man-apes" or sometimes even "ape-men." We do not _know_ that they were
directly ancestral to men but they can hardly have been so to apes.
Presently I'll describe them a bit more. The reason I mention them
here is that while they had brains no larger than those of apes, their
hipbones were enough like ours so that they must have stood erect.
There is no good reason to think they couldn't have walked as we do.
BRAINS, HANDS, AND TOOLS
Whether the australopithecines were our ancestors or not, the proper
ancestors of men must have been able to stand erect and to walk on
their two feet. Three further important things probably were involved,
next, before they could become men proper. These are:
1. The increasing size and development of the brain.
2. The increasing usefulness (specialization) of the thumb and hand.
3. The use of tools.
Nobody knows which of these three is most important, or which came
first. Most probably the growth of all three things was very much
blended together. If you think about each of the things, you will see
what I mean. Unless your hand is more flexible than a paw, and your
thumb will work against (or oppose) your fingers, you can't hold a tool
very well. But you wouldn't get the idea of using a tool unless you had
enough brain to help you see cause and effect. And it is rather hard to
see how your hand and brain would develop unless they had something to
practice on--like using tools. In Professor Krogman's words, the hand
must become "the obedient servant of the eye and the brain." It is the
_co-ordination_ of these things that counts.
Many other things must have been happening to the bodies of the
creatures who were the ancestors of men. Our ancestors had to develop
organs of speech. More than that, they had to get the idea of letting
_certain sounds_ made with these speech organs have _certain meanings_.
All this must have gone very slowly. Probably everything was developing
little by little, all together. Men became men very slowly.
WHEN SHALL WE CALL MEN MEN?
What do I mean when I say "men"? People who looked pretty much as we
do, and who used different tools to do different things, are men to me.
We'll probably never know whether the earliest ones talked or not. They
probably had vocal cords, so they could make sounds, but did they know
how to make sounds work as symbols to carry meanings? But if the fossil
bones look like our skeletons, and if we find tools which well agree
couldn't have been made by nature or by animals, then I'd say we had
traces of _men_.
The australopithecine finds of the Transvaal and Bechuanaland, in
south Africa, are bound to come into the discussion here. I've already
told you that the australopithecines could have stood upright and
walked on their two hind legs. They come from the very base of the
Pleistocene or Ice Age, and a few coarse stone tools have been found
with the australopithecine fossils. But there are three varieties
of the australopithecines and they last on until a time equal to
that of the second alpine glaciation. They are the best suggestion
we have yet as to what the ancestors of men _may_ have looked like.
They were certainly closer to men than to apes. Although their brain
size was no larger than the brains of modern apes their body size and
stature were quite small; hence, relative to their small size, their
brains were large. We have not been able to prove without doubt that
the australopithecines were _tool-making_ creatures, even though the
recent news has it that tools have been found with australopithecine
bones. The doubt as to whether the australopithecines used the tools
themselves goes like this--just suppose some man-like creature (whose
bones we have not yet found) made the tools and used them to kill
and butcher australopithecines. Hence a few experts tend to let
australopithecines still hang in limbo as "man-apes."
THE EARLIEST MEN WE KNOW
I'll postpone talking about the tools of early men until the next
chapter. The men whose bones were the earliest of the Java lot have
been given the name _Meganthropus_. The bones are very fragmentary. We
would not understand them very well unless we had the somewhat later
Javanese lot--the more commonly known _Pithecanthropus_ or Java
man--against which to refer them for study. One of the less well-known
and earliest fragments, a piece of lower jaw and some teeth, rather
strongly resembles the lower jaws and teeth of the australopithecine
type. Was _Meganthropus_ a sort of half-way point between the
australopithecines and _Pithecanthropus_? It is still too early to say.
We shall need more finds before we can be definite one way or the other.
Java man, _Pithecanthropus_, comes from geological beds equal in age
to the latter part of the second alpine glaciation; the _Meganthropus_
finds refer to beds of the beginning of this glaciation. The first
finds of Java man were made in 1891-92 by Dr. Eugene Dubois, a Dutch
doctor in the colonial service. Finds have continued to be made. There
are now bones enough to account for four skulls. There are also four
jaws and some odd teeth and thigh bones. Java man, generally speaking,
was about five feet six inches tall, and didn't hold his head very
erect. His skull was very thick and heavy and had room for little more
than two-thirds as large a brain as we have. He had big teeth and a big
jaw and enormous eyebrow ridges.
No tools were found in the geological deposits where bones of Java man
appeared. There are some tools in the same general area, but they come
a bit later in time. One reason we accept the Java man as man--aside
from his general anatomical appearance--is that these tools probably
belonged to his near descendants.
Remember that there are several varieties of men in the whole early
Java lot, at least two of which are earlier than the _Pithecanthropus_,
Java man. Some of the earlier ones seem to have gone in for
"bigness," in tooth-size at least. _Meganthropus_ is one of these
earlier varieties. As we said, he _may_ turn out to be a link to
the australopithecines, who _may_ or _may not_ be ancestral to men.
_Meganthropus_ is best understandable in terms of _Pithecanthropus_,
who appeared later in the same general area. _Pithecanthropus_ is
pretty well understandable from the bones he left us, and also because
of his strong resemblance to the fully tool-using cave-dwelling Peking
man, _Sinanthropus_, about whom we shall talk next. But you can see
that the physical anthropologists and prehistoric archeologists still
have a lot of work to do on the problem of earliest men.
PEKING MEN AND SOME EARLY WESTERNERS
The earliest known Chinese are called _Sinanthropus_, or Peking man,
because the finds were made near that city. In World War II, the United
States Marine guard at our Embassy in Peking tried to help get the
bones out of the city before the Japanese attack. Nobody knows where
these bones are now. The Red Chinese accuse us of having stolen them.
They were last seen on a dock-side at a Chinese port. But should you
catch a Marine with a sack of old bones, perhaps we could achieve peace
in Asia by returning them! Fortunately, there is a complete set of
casts of the bones.
Peking man lived in a cave in a limestone hill, made tools, cracked
animal bones to get the marrow out, and used fire. Incidentally, the
bones of Peking man were found because Chinese dig for what they call
"dragon bones" and "dragon teeth." Uneducated Chinese buy these things
in their drug stores and grind them into powder for medicine. The
"dragon" teeth and bones are really fossils of ancient animals, and
sometimes of men. The people who supply the drug stores have learned
where to dig for strange bones and teeth. Paleontologists who get to
China go to the drug stores to buy fossils. In a roundabout way, this
is how the fallen-in cave of Peking man at Choukoutien was discovered.
Peking man was not quite as tall as Java man but he probably stood
straighter. His skull looked very much like that of the Java skull
except that it had room for a slightly larger brain. His face was less
brutish than was Java man's face, but this isn't saying much.
Peking man dates from early in the interglacial period following the
second alpine glaciation. He probably lived close to 350,000 years
ago. There are several finds to account for in Europe by about this
time, and one from northwest Africa. The very large jawbone found
near Heidelberg in Germany is doubtless even earlier than Peking man.
The beds where it was found are of second alpine glacial times, and
recently some tools have been said to have come from the same beds.
There is not much I need tell you about the Heidelberg jaw save that it
seems certainly to have belonged to an early man, and that it is very
big.
Another find in Germany was made at Steinheim. It consists of the
fragmentary skull of a man. It is very important because of its
relative completeness, but it has not yet been fully studied. The bone
is thick, but the back of the head is neither very low nor primitive,
and the face is also not primitive. The forehead does, however, have
big ridges over the eyes. The more fragmentary skull from Swanscombe in
England (p. 11) has been much more carefully studied. Only the top and
back of that skull have been found. Since the skull rounds up nicely,
it has been assumed that the face and forehead must have been quite
modern. Careful comparison with Steinheim shows that this was not
necessarily so. This is important because it bears on the question of
how early truly modern man appeared.
Recently two fragmentary jaws were found at Ternafine in Algeria,
northwest Africa. They look like the jaws of Peking man. Tools were
found with them. Since no jaws have yet been found at Steinheim or
Swanscombe, but the time is the same, one wonders if these people had
jaws like those of Ternafine.
WHAT HAPPENED TO JAVA AND PEKING MEN
Professor Weidenreich thought that there were at least a dozen ways in
which the Peking man resembled the modern Mongoloids. This would seem
to indicate that Peking man was really just a very early Chinese.
Several later fossil men have been found in the Java-Australian area.
The best known of these is the so-called Solo man. There are some finds
from Australia itself which we now know to be quite late. But it looks
as if we may assume a line of evolution from Java man down to the
modern Australian natives. During parts of the Ice Age there was a land
bridge all the way from Java to Australia.
TWO ENGLISHMEN WHO WEREN'T OLD
The older textbooks contain descriptions of two English finds which
were thought to be very old. These were called Piltdown (_Eoanthropus
dawsoni_) and Galley Hill. The skulls were very modern in appearance.
In 1948-49, British scientists began making chemical tests which proved
that neither of these finds is very old. It is now known that both
Piltdown man and the tools which were said to have been found with
him were part of an elaborate fake!
TYPICAL CAVE MEN
The next men we have to talk about are all members of a related group.
These are the Neanderthal group. Neanderthal man himself was found in
the Neander Valley, near Düsseldorf, Germany, in 1856. He was the first
human fossil to be recognized as such.
[Illustration: PRINCIPAL KNOWN TYPES OF FOSSIL MEN
CRO-MAGNON
NEANDERTHAL
MODERN SKULL
COMBE-CAPELLE
SINANTHROPUS
PITHECANTHROPUS]
Some of us think that the neanderthaloids proper are only those people
of western Europe who didn't get out before the beginning of the last
great glaciation, and who found themselves hemmed in by the glaciers
in the Alps and northern Europe. Being hemmed in, they intermarried
a bit too much and developed into a special type. Professor F. Clark
Howell sees it this way. In Europe, the earliest trace of men we
now know is the Heidelberg jaw. Evolution continued in Europe, from
Heidelberg through the Swanscombe and Steinheim types to a group of
pre-neanderthaloids. There are traces of these pre-neanderthaloids
pretty much throughout Europe during the third interglacial period--say
100,000 years ago. The pre-neanderthaloids are represented by such
finds as the ones at Ehringsdorf in Germany and Saccopastore in Italy.
I won't describe them for you, since they are simply less extreme than
the neanderthaloids proper--about half way between Steinheim and the
classic Neanderthal people.
Professor Howell believes that the pre-neanderthaloids who happened to
get caught in the pocket of the southwest corner of Europe at the onset
of the last great glaciation became the classic Neanderthalers. Out in
the Near East, Howell thinks, it is possible to see traces of people
evolving from the pre-neanderthaloid type toward that of fully modern
man. Certainly, we don't see such extreme cases of neanderthaloidism
outside of western Europe.
There are at least a dozen good examples in the main or classic
Neanderthal group in Europe. They date to just before and in the
earlier part of the last great glaciation (85,000 to 40,000 years ago).
Many of the finds have been made in caves. The "cave men" the movies
and the cartoonists show you are probably meant to be Neanderthalers.
I'm not at all sure they dragged their women by the hair; the women
were probably pretty tough, too!
Neanderthal men had large bony heads, but plenty of room for brains.
Some had brain cases even larger than the average for modern man. Their
faces were heavy, and they had eyebrow ridges of bone, but the ridges
were not as big as those of Java man. Their foreheads were very low,
and they didn't have much chin. They were about five feet three inches
tall, but were heavy and barrel-chested. But the Neanderthalers didn't
slouch as much as they've been blamed for, either.
One important thing about the Neanderthal group is that there is a fair
number of them to study. Just as important is the fact that we know
something about how they lived, and about some of the tools they made.
OTHER MEN CONTEMPORARY WITH THE NEANDERTHALOIDS
We have seen that the neanderthaloids seem to be a specialization
in a corner of Europe. What was going on elsewhere? We think that
the pre-neanderthaloid type was a generally widespread form of men.
From this type evolved other more or less extreme although generally
related men. The Solo finds in Java form one such case. Another was the
Rhodesian man of Africa, and the more recent Hopefield finds show more
of the general Rhodesian type. It is more confusing than it needs to be
if these cases outside western Europe are called neanderthaloids. They
lived during the same approximate time range but they were all somewhat
different-looking people.
EARLY MODERN MEN
How early is modern man (_Homo sapiens_), the "wise" man? Some people
have thought that he was very early, a few still think so. Piltdown
and Galley Hill, which were quite modern in anatomical appearance and
_supposedly_ very early in date, were the best evidence for very
early modern men. Now that Piltdown has been liquidated and Galley Hill
is known to be very late, what is left of the idea?
The backs of the skulls of the Swanscombe and Steinheim finds look
rather modern. Unless you pay attention to the face and forehead of the
Steinheim find--which not many people have--and perhaps also consider
the Ternafine jaws, you might come to the conclusion that the crown of
the Swanscombe head was that of a modern-like man.
Two more skulls, again without faces, are available from a French
cave site, Fontéchevade. They come from the time of the last great
interglacial, as did the pre-neanderthaloids. The crowns of the
Fontéchevade skulls also look quite modern. There is a bit of the
forehead preserved on one of these skulls and the brow-ridge is not
heavy. Nevertheless, there is a suggestion that the bones belonged to
an immature individual. In this case, his (or even more so, if _her_)
brow-ridges would have been weak anyway. The case for the Fontéchevade
fossils, as modern type men, is little stronger than that for
Swanscombe, although Professor Vallois believes it a good case.
It seems to add up to the fact that there were people living in
Europe--before the classic neanderthaloids--who looked more modern,
in some features, than the classic western neanderthaloids did. Our
best suggestion of what men looked like--just before they became fully
modern--comes from a cave on Mount Carmel in Palestine.
THE FIRST MODERNS
Professor T. D. McCown and the late Sir Arthur Keith, who studied the
Mount Carmel bones, figured out that one of the two groups involved
was as much as 70 per cent modern. There were, in fact, two groups or
varieties of men in the Mount Carmel caves and in at least two other
Palestinian caves of about the same time. The time would be about that
of the onset of colder weather, when the last glaciation was beginning
in the north--say 75,000 years ago.
The 70 per cent modern group came from only one cave, Mugharet es-Skhul
(cave of the kids). The other group, from several caves, had bones of
men of the type we've been calling pre-neanderthaloid which we noted
were widespread in Europe and beyond. The tools which came with each
of these finds were generally similar, and McCown and Keith, and other
scholars since their study, have tended to assume that both the Skhul
group and the pre-neanderthaloid group came from exactly the same time.
The conclusion was quite natural: here was a population of men in the
act of evolving in two different directions. But the time may not be
exactly the same. It is very difficult to be precise, within say 10,000
years, for a time some 75,000 years ago. If the Skhul men are in fact
later than the pre-neanderthaloid group of Palestine, as some of us
think, then they show how relatively modern some men were--men who
lived at the same time as the classic Neanderthalers of the European
pocket.
Soon after the first extremely cold phase of the last glaciation, we
begin to get a number of bones of completely modern men in Europe.
We also get great numbers of the tools they made, and their living
places in caves. Completely modern skeletons begin turning up in caves
dating back to toward 40,000 years ago. The time is about that of the
beginning of the second phase of the last glaciation. These skeletons
belonged to people no different from many people we see today. Like
people today, not everybody looked alike. (The positions of the more
important fossil men of later Europe are shown in the chart on page
72.)
DIFFERENCES IN THE EARLY MODERNS
The main early European moderns have been divided into two groups, the
Cro-Magnon group and the Combe Capelle-Brünn group. Cro-Magnon people
were tall and big-boned, with large, long, and rugged heads. They
must have been built like many present-day Scandinavians. The Combe
Capelle-Brünn people were shorter; they had narrow heads and faces, and
big eyebrow-ridges. Of course we don't find the skin or hair of these
people. But there is little doubt they were Caucasoids (Whites).
Another important find came in the Italian Riviera, near Monte Carlo.
Here, in a cave near Grimaldi, there was a grave containing a woman
and a young boy, buried together. The two skeletons were first called
"Negroid" because some features of their bones were thought to resemble
certain features of modern African Negro bones. But more recently,
Professor E. A. Hooton and other experts questioned the use of the word
"Negroid" in describing the Grimaldi skeletons. It is true that nothing
is known of the skin color, hair form, or any other fleshy feature of
the Grimaldi people, so that the word "Negroid" in its usual meaning is
not proper here. It is also not clear whether the features of the bones
claimed to be "Negroid" are really so at all.
From a place called Wadjak, in Java, we have proto-Australoid skulls
which closely resemble those of modern Australian natives. Some of
the skulls found in South Africa, especially the Boskop skull, look
like those of modern Bushmen, but are much bigger. The ancestors of
the Bushmen seem to have once been very widespread south of the Sahara
Desert. True African Negroes were forest people who apparently expanded
out of the west central African area only in the last several thousand
years. Although dark in skin color, neither the Australians nor the
Bushmen are Negroes; neither the Wadjak nor the Boskop skulls are
Negroid.
As we've already mentioned, Professor Weidenreich believed that Peking
man was already on the way to becoming a Mongoloid. Anyway, the
Mongoloids would seem to have been present by the time of the Upper
Cave at Choukoutien, the _Sinanthropus_ find-spot.
WHAT THE DIFFERENCES MEAN
What does all this difference mean? It means that, at one moment in
time, within each different area, men tended to look somewhat alike.
From area to area, men tended to look somewhat different, just as
they do today. This is all quite natural. People _tended_ to mate
near home; in the anthropological jargon, they made up geographically
localized breeding populations. The simple continental division of
stocks--black = Africa, yellow = Asia, white = Europe--is too simple
a picture to fit the facts. People became accustomed to life in some
particular area within a continent (we might call it a "natural area").
As they went on living there, they evolved towards some particular
physical variety. It would, of course, have been difficult to draw
a clear boundary between two adjacent areas. There must always have
been some mating across the boundaries in every case. One thing human
beings don't do, and never have done, is to mate for "purity." It is
self-righteous nonsense when we try to kid ourselves into thinking that
they do.
I am not going to struggle with the whole business of modern stocks and
races. This is a book about prehistoric men, not recent historic or
modern men. My physical anthropologist friends have been very patient
in helping me to write and rewrite this chapter--I am not going to
break their patience completely. Races are their business, not mine,
and they must do the writing about races. I shall, however, give two
modern definitions of race, and then make one comment.
Dr. William G. Boyd, professor of Immunochemistry, School of
Medicine, Boston University: "We may define a human race as a
population which differs significantly from other human populations
in regard to the frequency of one or more of the genes it
possesses."
Professor Sherwood L. Washburn, professor of Physical Anthropology,
Department of Anthropology, the University of California: "A race
is a group of genetically similar populations, and races intergrade
because there are always intermediate populations."
My comment is that the ideas involved here are all biological: they
concern groups, _not_ individuals. Boyd and Washburn may differ a bit
on what they want to consider a population, but a population is a
group nevertheless, and genetics is biology to the hilt. Now a lot of
people still think of "race" in terms of how people dress or fix their
food or of other habits or customs they have. The next step is to talk
about "racial purity." None of this has anything whatever to do with
race proper, which is a matter of the biology of groups.
Incidentally, I'm told that if man very carefully _controls_
the breeding of certain animals over generations--dogs, cattle,
chickens--he might achieve a "pure" race of animals. But he doesn't do
it. Some unfortunate genetic trait soon turns up, so this has just as
carefully to be bred out again, and so on.
SUMMARY OF PRESENT KNOWLEDGE OF FOSSIL MEN
The earliest bones of men we now have--upon which all the experts
would probably agree--are those of _Meganthropus_, from Java, of about
450,000 years ago. The earlier australopithecines of Africa were
possibly not tool-users and may not have been ancestral to men at all.
But there is an alternate and evidently increasingly stronger chance
that some of them may have been. The Kanam jaw from Kenya, another
early possibility, is not only very incomplete but its find-spot is
very questionable.
Java man proper, _Pithecanthropus_, comes next, at about 400,000 years
ago, and the big Heidelberg jaw in Germany must be of about the same
date. Next comes Swanscombe in England, Steinheim in Germany, the
Ternafine jaws in Algeria, and Peking man, _Sinanthropus_. They all
date to the second great interglacial period, about 350,000 years ago.
Piltdown and Galley Hill are out, and with them, much of the starch
in the old idea that there were two distinct lines of development
in human evolution: (1) a line of paleoanthropic development from
Heidelberg to the Neanderthalers where it became extinct, and (2) a
very early modern line, through Piltdown, Galley Hill, Swanscombe, to
us. Swanscombe, Steinheim, and Ternafine are just as easily cases of
very early pre-neanderthaloids.
The pre-neanderthaloids were very widespread during the third
interglacial: Ehringsdorf, Saccopastore, some of the Mount Carmel
people, and probably Fontéchevade are cases in point. A variety of
their descendants can be seen, from Java (Solo), Africa (Rhodesian
man), and about the Mediterranean and in western Europe. As the acute
cold of the last glaciation set in, the western Europeans found
themselves surrounded by water, ice, or bitter cold tundra. To vastly
over-simplify it, they bred in and became classic neanderthaloids.
But on Mount Carmel, the Skhul cave-find with its 70 per cent modern
features shows what could happen elsewhere at the same time.
Lastly, from about 40,000 or 35,000 years ago--the time of the onset
of the second phase of the last glaciation--we begin to find the fully
modern skeletons of men. The modern skeletons differ from place to
place, just as different groups of men living in different places still
look different.
What became of the Neanderthalers? Nobody can tell me for sure. I've a
hunch they were simply bred out again when the cold weather was over.
Many Americans, as the years go by, are no longer ashamed to claim they
have "Indian blood" in their veins. Give us a few more generations
and there will not be very many other Americans left to whom we can
brag about it. It certainly isn't inconceivable to me to imagine a
little Cro-Magnon boy bragging to his friends about his tough, strong,
Neanderthaler great-great-great-great-grandfather!
Cultural BEGINNINGS
[Illustration]
Men, unlike the lower animals, are made up of much more than flesh and
blood and bones; for men have culture.
WHAT IS CULTURE?
"Culture" is a word with many meanings. The doctors speak of making a
"culture" of a certain kind of bacteria, and ants are said to have a
"culture." Then there is the Emily Post kind of "culture"--you say a
person is "cultured," or that he isn't, depending on such things as
whether or not he eats peas with his knife.
The anthropologists use the word too, and argue heatedly over its finer
meanings; but they all agree that every human being is part of or has
some kind of culture. Each particular human group has a particular
culture; that is one of the ways in which we can tell one group of
men from another. In this sense, a CULTURE means the way the members
of a group of people think and believe and live, the tools they make,
and the way they do things. Professor Robert Redfield says a culture
is "an organized or formalized body of conventional understandings."
"Conventional understandings" means the whole set of rules, beliefs,
and standards which a group of people lives by. These understandings
show themselves in art, and in the other things a people may make and
do. The understandings continue to last, through tradition, from one
generation to another. They are what really characterize different
human groups.
SOME CHARACTERISTICS OF CULTURE
A culture lasts, although individual men in the group die off. On
the other hand, a culture changes as the different conventions and
understandings change. You could almost say that a culture lives in the
minds of the men who have it. But people are not born with it; they
get it as they grow up. Suppose a day-old Hungarian baby is adopted by
a family in Oshkosh, Wisconsin, and the child is not told that he is
Hungarian. He will grow up with no more idea of Hungarian culture than
anyone else in Oshkosh.
So when I speak of ancient Egyptian culture, I mean the whole body
of understandings and beliefs and knowledge possessed by the ancient
Egyptians. I mean their beliefs as to why grain grew, as well as their
ability to make tools with which to reap the grain. I mean their
beliefs about life after death. What I am thinking about as culture is
a thing which lasted in time. If any one Egyptian, even the Pharaoh,
died, it didn't affect the Egyptian culture of that particular moment.
PREHISTORIC CULTURES
For that long period of man's history that is all prehistory, we have
no written descriptions of cultures. We find only the tools men made,
the places where they lived, the graves in which they buried their
dead. Fortunately for us, these tools and living places and graves all
tell us something about the ways these men lived and the things they
believed. But the story we learn of the very early cultures must be
only a very small part of the whole, for we find so few things. The
rest of the story is gone forever. We have to do what we can with what
we find.
For all of the time up to about 75,000 years ago, which was the time
of the classic European Neanderthal group of men, we have found few
cave-dwelling places of very early prehistoric men. First, there is the
fallen-in cave where Peking man was found, near Peking. Then there are
two or three other _early_, but not _very early_, possibilities. The
finds at the base of the French cave of Fontéchevade, those in one of
the Makapan caves in South Africa, and several open sites such as Dr.
L. S. B. Leakey's Olorgesailie in Kenya doubtless all lie earlier than
the time of the main European Neanderthal group, but none are so early
as the Peking finds.
You can see that we know very little about the home life of earlier
prehistoric men. We find different kinds of early stone tools, but we
can't even be really sure which tools may have been used together.
WHY LITTLE HAS LASTED FROM EARLY TIMES
Except for the rare find-spots mentioned above, all our very early
finds come from geological deposits, or from the wind-blown surfaces
of deserts. Here is what the business of geological deposits really
means. Let us say that a group of people was living in England about
300,000 years ago. They made the tools they needed, lived in some sort
of camp, almost certainly built fires, and perhaps buried their dead.
While the climate was still warm, many generations may have lived in
the same place, hunting, and gathering nuts and berries; but after some
few thousand years, the weather began very gradually to grow colder.
These early Englishmen would not have known that a glacier was forming
over northern Europe. They would only have noticed that the animals
they hunted seemed to be moving south, and that the berries grew larger
toward the south. So they would have moved south, too.
The camp site they left is the place we archeologists would really have
liked to find. All of the different tools the people used would have
been there together--many broken, some whole. The graves, and traces
of fire, and the tools would have been there. But the glacier got
there first! The front of this enormous sheet of ice moved down over
the country, crushing and breaking and plowing up everything, like a
gigantic bulldozer. You can see what happened to our camp site.
Everything the glacier couldn't break, it pushed along in front of it
or plowed beneath it. Rocks were ground to gravel, and soil was caught
into the ice, which afterwards melted and ran off as muddy water. Hard
tools of flint sometimes remained whole. Human bones weren't so hard;
it's a wonder _any_ of them lasted. Gushing streams of melt water
flushed out the debris from underneath the glacier, and water flowed
off the surface and through great crevasses. The hard materials these
waters carried were even more rolled and ground up. Finally, such
materials were dropped by the rushing waters as gravels, miles from
the front of the glacier. At last the glacier reached its greatest
extent; then it melted backward toward the north. Debris held in the
ice was dropped where the ice melted, or was flushed off by more melt
water. When the glacier, leaving the land, had withdrawn to the sea,
great hunks of ice were broken off as icebergs. These icebergs probably
dropped the materials held in their ice wherever they floated and
melted. There must be many tools and fragmentary bones of prehistoric
men on the bottom of the Atlantic Ocean and the North Sea.
Remember, too, that these glaciers came and went at least three or four
times during the Ice Age. Then you will realize why the earlier things
we find are all mixed up. Stone tools from one camp site got mixed up
with stone tools from many other camp sites--tools which may have been
made tens of thousands or more years apart. The glaciers mixed them
all up, and so we cannot say which particular sets of tools belonged
together in the first place.
EOLITHS
But what sort of tools do we find earliest? For almost a century,
people have been picking up odd bits of flint and other stone in the
oldest Ice Age gravels in England and France. It is now thought these
odd bits of stone weren't actually worked by prehistoric men. The
stones were given a name, _eoliths_, or "dawn stones." You can see them
in many museums; but you can be pretty sure that very few of them were
actually fashioned by men.
It is impossible to pick out eoliths that seem to be made in any
one _tradition_. By tradition I mean a set of habits for making one
kind of tool for some particular job. No two eoliths look very much
alike: tools made as part of some one tradition all look much alike.
Now it's easy to suppose that the very earliest prehistoric men picked
up and used almost any sort of stone. This wouldn't be surprising; you
and I do it when we go camping. In other words, some of these eoliths
may actually have been used by prehistoric men. They must have used
anything that might be handy when they needed it. We could have figured
that out without the eoliths.
THE ROAD TO STANDARDIZATION
Reasoning from what we know or can easily imagine, there should have
been three major steps in the prehistory of tool-making. The first step
would have been simple _utilization_ of what was at hand. This is the
step into which the eoliths would fall. The second step would have
been _fashioning_--the haphazard preparation of a tool when there was a
need for it. Probably many of the earlier pebble tools, which I shall
describe next, fall into this group. The third step would have been
_standardization_. Here, men began to make tools according to certain
set traditions. Counting the better-made pebble tools, there are four
such traditions or sets of habits for the production of stone tools in
earliest prehistoric times. Toward the end of the Pleistocene, a fifth
tradition appears.
PEBBLE TOOLS
At the beginning of the last chapter, you'll remember that I said there
were tools from very early geological beds. The earliest bones of men
have not yet been found in such early beds although the Sterkfontein
australopithecine cave approaches this early date. The earliest tools
come from Africa. They date back to the time of the first great
alpine glaciation and are at least 500,000 years old. The earliest
ones are made of split pebbles, about the size of your fist or a bit
bigger. They go under the name of pebble tools. There are many natural
exposures of early Pleistocene geological beds in Africa, and the
prehistoric archeologists of south and central Africa have concentrated
on searching for early tools. Other finds of early pebble tools have
recently been made in Algeria and Morocco.
[Illustration: SOUTH AFRICAN PEBBLE TOOL]
There are probably early pebble tools to be found in areas of the
Old World besides Africa; in fact, some prehistorians already claim
to have identified a few. Since the forms and the distinct ways of
making the earlier pebble tools had not yet sufficiently jelled into
a set tradition, they are difficult for us to recognize. It is not
so difficult, however, if there are great numbers of "possibles"
available. A little later in time the tradition becomes more clearly
set, and pebble tools are easier to recognize. So far, really large
collections of pebble tools have only been found and examined in Africa.
CORE-BIFACE TOOLS
The next tradition we'll look at is the _core_ or biface one. The tools
are large pear-shaped pieces of stone trimmed flat on the two opposite
sides or faces. Hence "biface" has been used to describe these tools.
The front view is like that of a pear with a rather pointed top, and
the back view looks almost exactly the same. Look at them side on, and
you can see that the front and back faces are the same and have been
trimmed to a thin tip. The real purpose in trimming down the two faces
was to get a good cutting edge all around. You can see all this in the
illustration.
[Illustration: ABBEVILLIAN BIFACE]
We have very little idea of the way in which these core-bifaces were
used. They have been called "hand axes," but this probably gives the
wrong idea, for an ax, to us, is not a pointed tool. All of these early
tools must have been used for a number of jobs--chopping, scraping,
cutting, hitting, picking, and prying. Since the core-bifaces tend to
be pointed, it seems likely that they were used for hitting, picking,
and prying. But they have rough cutting edges, so they could have been
used for chopping, scraping, and cutting.
FLAKE TOOLS
The third tradition is the _flake_ tradition. The idea was to get a
tool with a good cutting edge by simply knocking a nice large flake off
a big block of stone. You had to break off the flake in such a way that
it was broad and thin, and also had a good sharp cutting edge. Once you
really got on to the trick of doing it, this was probably a simpler way
to make a good cutting tool than preparing a biface. You have to know
how, though; I've tried it and have mashed my fingers more than once.
The flake tools look as if they were meant mainly for chopping,
scraping, and cutting jobs. When one made a flake tool, the idea seems
to have been to produce a broad, sharp, cutting edge.
[Illustration: CLACTONIAN FLAKE]
The core-biface and the flake traditions were spread, from earliest
times, over much of Europe, Africa, and western Asia. The map on page
52 shows the general area. Over much of this great region there was
flint. Both of these traditions seem well adapted to flint, although
good core-bifaces and flakes were made from other kinds of stone,
especially in Africa south of the Sahara.
CHOPPERS AND ADZE-LIKE TOOLS
The fourth early tradition is found in southern and eastern Asia, from
northwestern India through Java and Burma into China. Father Maringer
recently reported an early group of tools in Japan, which most resemble
those of Java, called Patjitanian. The prehistoric men in this general
area mostly used quartz and tuff and even petrified wood for their
stone tools (see illustration, p. 46).
This fourth early tradition is called the _chopper-chopping tool_
tradition. It probably has its earliest roots in the pebble tool
tradition of African type. There are several kinds of tools in this
tradition, but all differ from the western core-bifaces and flakes.
There are broad, heavy scrapers or cleavers, and tools with an
adze-like cutting edge. These last-named tools are called "hand adzes,"
just as the core-bifaces of the west have often been called "hand
axes." The section of an adze cutting edge is ? shaped; the section of
an ax is < shaped.
[Illustration: ANYATHIAN ADZE-LIKE TOOL]
There are also pointed pebble tools. Thus the tool kit of these early
south and east Asiatic peoples seems to have included tools for doing
as many different jobs as did the tools of the Western traditions.
Dr. H. L. Movius has emphasized that the tools which were found in the
Peking cave with Peking man belong to the chopper-tool tradition. This
is the only case as yet where the tools and the man have been found
together from very earliest times--if we except Sterkfontein.
DIFFERENCES WITHIN THE TOOL-MAKING TRADITIONS
The latter three great traditions in the manufacture of stone
tools--and the less clear-cut pebble tools before them--are all we have
to show of the cultures of the men of those times. Changes happened in
each of the traditions. As time went on, the tools in each tradition
were better made. There could also be slight regional differences in
the tools within one tradition. Thus, tools with small differences, but
all belonging to one tradition, can be given special group (facies)
names.
This naming of special groups has been going on for some time. Here are
some of these names, since you may see them used in museum displays
of flint tools, or in books. Within each tradition of tool-making
(save the chopper tools), the earliest tool type is at the bottom
of the list, just as it appears in the lowest beds of a geological
stratification.[3]
[3] Archeologists usually make their charts and lists with the
earliest materials at the bottom and the latest on top, since
this is the way they find them in the ground.
Chopper tool (all about equally early):
Anyathian (Burma)
Choukoutienian (China)
Patjitanian (Java)
Soan (India)
Flake:
Typical Mousterian
Levalloiso-Mousterian
Levalloisian
Tayacian
Clactonian (localized in England)
Core-biface:
Some blended elements in Mousterian
Micoquian (= Acheulean 6 and 7)
Acheulean
Abbevillian (once called Chellean)
Pebble tool:
Oldowan
Ain Hanech
pre-Stellenbosch
Kafuan
The core-biface and the flake traditions appear in the chart (p. 65).
The early archeologists had many of the tool groups named before they
ever realized that there were broader tool preparation traditions. This
was understandable, for in dealing with the mixture of things that come
out of glacial gravels the easiest thing to do first is to isolate
individual types of tools into groups. First you put a bushel-basketful
of tools on a table and begin matching up types. Then you give names to
the groups of each type. The groups and the types are really matters of
the archeologist's choice; in real life, they were probably less exact
than the archeologists' lists of them. We now know pretty well in which
of the early traditions the various early groups belong.
THE MEANING OF THE DIFFERENT TRADITIONS
What do the traditions really mean? I see them as the standardization
of ways to make tools for particular jobs. We may not know exactly what
job the maker of a particular core-biface or flake tool had in mind. We
can easily see, however, that he already enjoyed a "know-how," a set of
persistent habits of tool preparation, which would always give him the
same type of tool when he wanted to make it. Therefore, the traditions
show us that persistent habits already existed for the preparation of
one type of tool or another.
This tells us that one of the characteristic aspects of human culture
was already present. There must have been, in the minds of these
early men, a notion of the ideal type of tool for a particular job.
Furthermore, since we find so many thousands upon thousands of tools
of one type or another, the notion of the ideal types of tools _and_
the know-how for the making of each type must have been held in common
by many men. The notions of the ideal types and the know-how for their
production must have been passed on from one generation to another.
I could even guess that the notions of the ideal type of one or the
other of these tools stood out in the minds of men of those times
somewhat like a symbol of "perfect tool for good job." If this were
so--remember it's only a wild guess of mine--then men were already
symbol users. Now let's go on a further step to the fact that the words
men speak are simply sounds, each different sound being a symbol for a
different meaning. If standardized tool-making suggests symbol-making,
is it also possible that crude word-symbols were also being made? I
suppose that it is not impossible.
There may, of course, be a real question whether tool-utilizing
creatures--our first step, on page 42--were actually men. Other
animals utilize things at hand as tools. The tool-fashioning creature
of our second step is more suggestive, although we may not yet feel
sure that many of the earlier pebble tools were man-made products. But
with the step to standardization and the appearance of the traditions,
I believe we must surely be dealing with the traces of culture-bearing
_men_. The conventional understandings which Professor Redfield's
definition of culture suggests are now evidenced for us in the
persistent habits for the preparation of stone tools. Were we able to
see the other things these prehistoric men must have made--in materials
no longer preserved for the archeologist to find--I believe there would
be clear signs of further conventional understandings. The men may have
been physically primitive and pretty shaggy in appearance, but I think
we must surely call them men.
AN OLDER INTERPRETATION OF THE WESTERN TRADITIONS
In the last chapter, I told you that many of the older archeologists
and human paleontologists used to think that modern man was very old.
The supposed ages of Piltdown and Galley Hill were given as evidence
of the great age of anatomically modern man, and some interpretations
of the Swanscombe and Fontéchevade fossils were taken to support
this view. The conclusion was that there were two parallel lines or
"phyla" of men already present well back in the Pleistocene. The
first of these, the more primitive or "paleoanthropic" line, was
said to include Heidelberg, the proto-neanderthaloids and classic
Neanderthal. The more anatomically modern or "neanthropic" line was
thought to consist of Piltdown and the others mentioned above. The
Neanderthaler or paleoanthropic line was thought to have become extinct
after the first phase of the last great glaciation. Of course, the
modern or neanthropic line was believed to have persisted into the
present, as the basis for the world's population today. But with
Piltdown liquidated, Galley Hill known to be very late, and Swanscombe
and Fontéchevade otherwise interpreted, there is little left of the
so-called parallel phyla theory.
While the theory was in vogue, however, and as long as the European
archeological evidence was looked at in one short-sighted way, the
archeological materials _seemed_ to fit the parallel phyla theory. It
was simply necessary to believe that the flake tools were made only
by the paleoanthropic Neanderthaler line, and that the more handsome
core-biface tools were the product of the neanthropic modern-man line.
Remember that _almost_ all of the early prehistoric European tools
came only from the redeposited gravel beds. This means that the tools
were not normally found in the remains of camp sites or work shops
where they had actually been dropped by the men who made and used
them. The tools came, rather, from the secondary hodge-podge of the
glacial gravels. I tried to give you a picture of the bulldozing action
of glaciers (p. 40) and of the erosion and weathering that were
side-effects of a glacially conditioned climate on the earth's surface.
As we said above, if one simply plucks tools out of the redeposited
gravels, his natural tendency is to type the tools by groups, and to
think that the groups stand for something _on their own_.
In 1906, M. Victor Commont actually made a rare find of what seems
to have been a kind of workshop site, on a terrace above the Somme
river in France. Here, Commont realized, flake tools appeared clearly
in direct association with core-biface tools. Few prehistorians paid
attention to Commont or his site, however. It was easier to believe
that flake tools represented a distinct "culture" and that this
"culture" was that of the Neanderthaler or paleoanthropic line, and
that the core-bifaces stood for another "culture" which was that of the
supposed early modern or neanthropic line. Of course, I am obviously
skipping many details here. Some later sites with Neanderthal fossils
do seem to have only flake tools, but other such sites have both types
of tools. The flake tools which appeared _with_ the core-bifaces
in the Swanscombe gravels were never made much of, although it
was embarrassing for the parallel phyla people that Fontéchevade
ran heavily to flake tools. All in all, the parallel phyla theory
flourished because it seemed so neat and easy to understand.
TRADITIONS ARE TOOL-MAKING HABITS, NOT CULTURES
In case you think I simply enjoy beating a dead horse, look in any
standard book on prehistory written twenty (or even ten) years ago, or
in most encyclopedias. You'll find that each of the individual tool
types, of the West, at least, was supposed to represent a culture.
The cultures were believed to correspond to parallel lines of human
evolution.
In 1937, Mr. Harper Kelley strongly re-emphasized the importance
of Commont's workshop site and the presence of flake tools with
core-bifaces. Next followed Dr. Movius' clear delineation of the
chopper-chopping tool tradition of the Far East. This spoiled the nice
symmetry of the flake-tool = paleoanthropic, core-biface = neanthropic
equations. Then came increasing understanding of the importance of
the pebble tools in Africa, and the location of several more workshop
sites there, especially at Olorgesailie in Kenya. Finally came the
liquidation of Piltdown and the deflation of Galley Hill's date. So it
is at last possible to picture an individual prehistoric man making a
flake tool to do one job and a core-biface tool to do another. Commont
showed us this picture in 1906, but few believed him.
[Illustration: DISTRIBUTION OF TOOL-PREPARATION TRADITIONS
Time approximately 100,000 years ago]
There are certainly a few cases in which flake tools did appear with
few or no core-bifaces. The flake-tool group called Clactonian in
England is such a case. Another good, but certainly later case is
that of the cave on Mount Carmel in Palestine, where the blended
pre-neanderthaloid, 70 per cent modern-type skulls were found. Here, in
the same level with the skulls, were 9,784 flint tools. Of these, only
three--doubtless strays--were core-bifaces; all the rest were flake
tools or flake chips. We noted above how the Fontéchevade cave ran to
flake tools. The only conclusion I would draw from this is that times
and circumstances did exist in which prehistoric men needed only flake
tools. So they only made flake tools for those particular times and
circumstances.
LIFE IN EARLIEST TIMES
What do we actually know of life in these earliest times? In the
glacial gravels, or in the terrace gravels of rivers once swollen by
floods of melt water or heavy rains, or on the windswept deserts, we
find stone tools. The earliest and coarsest of these are the pebble
tools. We do not yet know what the men who made them looked like,
although the Sterkfontein australopithecines probably give us a good
hint. Then begin the more formal tool preparation traditions of the
west--the core-bifaces and the flake tools--and the chopper-chopping
tool series of the farther east. There is an occasional roughly worked
piece of bone. From the gravels which yield the Clactonian flakes of
England comes the fire-hardened point of a wooden spear. There are
also the chance finds of the fossil human bones themselves, of which
we spoke in the last chapter. Aside from the cave of Peking man, none
of the earliest tools have been found in caves. Open air or workshop
sites which do not seem to have been disturbed later by some geological
agency are very rare.
The chart on page 65 shows graphically what the situation in
west-central Europe seems to have been. It is not yet certain whether
there were pebble tools there or not. The Fontéchevade cave comes
into the picture about 100,000 years ago or more. But for the earlier
hundreds of thousands of years--below the red-dotted line on the
chart--the tools we find come almost entirely from the haphazard
mixture within the geological contexts.
The stone tools of each of the earlier traditions are the simplest
kinds of all-purpose tools. Almost any one of them could be used for
hacking, chopping, cutting, and scraping; so the men who used them must
have been living in a rough and ready sort of way. They found or hunted
their food wherever they could. In the anthropological jargon, they
were food-gatherers, pure and simple.
Because of the mixture in the gravels and in the materials they
carried, we can't be sure which animals these men hunted. Bones of
the larger animals turn up in the gravels, but they could just as
well belong to the animals who hunted the men, rather than the other
way about. We don't know. This is why camp sites like Commont's and
Olorgesailie in Kenya are so important when we do find them. The animal
bones at Olorgesailie belonged to various mammals of extremely large
size. Probably they were taken in pit-traps, but there are a number of
groups of three round stones on the site which suggest that the people
used bolas. The South American Indians used three-ball bolas, with the
stones in separate leather bags connected by thongs. These were whirled
and then thrown through the air so as to entangle the feet of a fleeing
animal.
Professor F. Clark Howell recently returned from excavating another
important open air site at Isimila in Tanganyika. The site yielded
the bones of many fossil animals and also thousands of core-bifaces,
flakes, and choppers. But Howell's reconstruction of the food-getting
habits of the Isimila people certainly suggests that the word "hunting"
is too dignified for what they did; "scavenging" would be much nearer
the mark.
During a great part of this time the climate was warm and pleasant. The
second interglacial period (the time between the second and third great
alpine glaciations) lasted a long time, and during much of this time
the climate may have been even better than ours is now. We don't know
that earlier prehistoric men in Europe or Africa lived in caves. They
may not have needed to; much of the weather may have been so nice that
they lived in the open. Perhaps they didn't wear clothes, either.
WHAT THE PEKING CAVE-FINDS TELL US
The one early cave-dwelling we have found is that of Peking man, in
China. Peking man had fire. He probably cooked his meat, or used
the fire to keep dangerous animals away from his den. In the cave
were bones of dangerous animals, members of the wolf, bear, and cat
families. Some of the cat bones belonged to beasts larger than tigers.
There were also bones of other wild animals: buffalo, camel, deer,
elephants, horses, sheep, and even ostriches. Seventy per cent of the
animals Peking man killed were fallow deer. It's much too cold and dry
in north China for all these animals to live there today. So this list
helps us know that the weather was reasonably warm, and that there was
enough rain to grow grass for the grazing animals. The list also helps
the paleontologists to date the find.
Peking man also seems to have eaten plant food, for there are hackberry
seeds in the debris of the cave. His tools were made of sandstone and
quartz and sometimes of a rather bad flint. As we've already seen, they
belong in the chopper-tool tradition. It seems fairly clear that some
of the edges were chipped by right-handed people. There are also many
split pieces of heavy bone. Peking man probably split them so he could
eat the bone marrow, but he may have used some of them as tools.
Many of these split bones were the bones of Peking men. Each one of the
skulls had already had the base broken out of it. In no case were any
of the bones resting together in their natural relation to one another.
There is nothing like a burial; all of the bones are scattered. Now
it's true that animals could have scattered bodies that were not cared
for or buried. But splitting bones lengthwise and carefully removing
the base of a skull call for both the tools and the people to use them.
It's pretty clear who the people were. Peking man was a cannibal.
* * * * *
This rounds out about all we can say of the life and times of early
prehistoric men. In those days life was rough. You evidently had to
watch out not only for dangerous animals but also for your fellow men.
You ate whatever you could catch or find growing. But you had sense
enough to build fires, and you had already formed certain habits for
making the kinds of stone tools you needed. That's about all we know.
But I think we'll have to admit that cultural beginnings had been made,
and that these early people were really _men_.
MORE EVIDENCE of Culture
[Illustration]
While the dating is not yet sure, the material that we get from caves
in Europe must go back to about 100,000 years ago; the time of the
classic Neanderthal group followed soon afterwards. We don't know why
there is no earlier material in the caves; apparently they were not
used before the last interglacial phase (the period just before the
last great glaciation). We know that men of the classic Neanderthal
group were living in caves from about 75,000 to 45,000 years ago.
New radioactive carbon dates even suggest that some of the traces of
culture we'll describe in this chapter may have lasted to about 35,000
years ago. Probably some of the pre-neanderthaloid types of men had
also lived in caves. But we have so far found their bones in caves only
in Palestine and at Fontéchevade.
THE CAVE LAYERS
In parts of France, some peasants still live in caves. In prehistoric
time, many generations of people lived in them. As a result, many
caves have deep layers of debris. The first people moved in and lived
on the rock floor. They threw on the floor whatever they didn't want,
and they tracked in mud; nobody bothered to clean house in those days.
Their debris--junk and mud and garbage and what not--became packed
into a layer. As time went on, and generations passed, the layer grew
thicker. Then there might have been a break in the occupation of the
cave for a while. Perhaps the game animals got scarce and the people
moved away; or maybe the cave became flooded. Later on, other people
moved in and began making a new layer of their own on top of the first
layer. Perhaps this process of layering went on in the same cave for a
hundred thousand years; you can see what happened. The drawing on this
page shows a section through such a cave. The earliest layer is on the
bottom, the latest one on top. They go in order from bottom to top,
earliest to latest. This is the _stratification_ we talked about (p.
12).
[Illustration: SECTION OF SHELTER ON LOWER TERRACE, LE MOUSTIER]
While we may find a mix-up in caves, it's not nearly as bad as the
mixing up that was done by glaciers. The animal bones and shells, the
fireplaces, the bones of men, and the tools the men made all belong
together, if they come from one layer. That's the reason why the cave
of Peking man is so important. It is also the reason why the caves in
Europe and the Near East are so important. We can get an idea of which
things belong together and which lot came earliest and which latest.
In most cases, prehistoric men lived only in the mouths of caves.
They didn't like the dark inner chambers as places to live in. They
preferred rock-shelters, at the bases of overhanging cliffs, if there
was enough overhang to give shelter. When the weather was good, they no
doubt lived in the open air as well. I'll go on using the term "cave"
since it's more familiar, but remember that I really mean rock-shelter,
as a place in which people actually lived.
The most important European cave sites are in Spain, France, and
central Europe; there are also sites in England and Italy. A few caves
are known in the Near East and Africa, and no doubt more sites will be
found when the out-of-the-way parts of Europe, Africa, and Asia are
studied.
AN INDUSTRY DEFINED
We have already seen that the earliest European cave materials are
those from the cave of Fontéchevade. Movius feels certain that the
lowest materials here date back well into the third interglacial stage,
that which lay between the Riss (next to the last) and the Würm I
(first stage of the last) alpine glaciations. This material consists
of an _industry_ of stone tools, apparently all made in the flake
tradition. This is the first time we have used the word "industry."
It is useful to call all of the different tools found together in one
layer and made of _one kind of material_ an industry; that is, the
tools must be found together as men left them. Tools taken from the
glacial gravels (or from windswept desert surfaces or river gravels
or any geological deposit) are not together in this sense. We might
say the latter have only geological, not archeological context.
Archeological context means finding things just as men left them. We
can tell what tools go together in an industrial sense only if we
have archeological context.
Up to now, the only things we could have called industries were the
worked stone industry and perhaps the worked (?) bone industry of the
Peking cave. We could add some of the very clear cases of open air
sites, like Olorgesailie. We couldn't use the term for the stone tools
from the glacial gravels, because we do not know which tools belonged
together. But when the cave materials begin to appear in Europe, we can
begin to speak of industries. Most of the European caves of this time
contain industries of flint tools alone.
THE EARLIEST EUROPEAN CAVE LAYERS
We've just mentioned the industry from what is said to be the oldest
inhabited cave in Europe; that is, the industry from the deepest layer
of the site at Fontéchevade. Apparently it doesn't amount to much. The
tools are made of stone, in the flake tradition, and are very poorly
worked. This industry is called _Tayacian_. Its type tool seems to be
a smallish flake tool, but there are also larger flakes which seem to
have been fashioned for hacking. In fact, the type tool seems to be
simply a smaller edition of the Clactonian tool (pictured on p. 45).
None of the Fontéchevade tools are really good. There are scrapers,
and more or less pointed tools, and tools that may have been used
for hacking and chopping. Many of the tools from the earlier glacial
gravels are better made than those of this first industry we see in
a European cave. There is so little of this material available that
we do not know which is really typical and which is not. You would
probably find it hard to see much difference between this industry and
a collection of tools of the type called Clactonian, taken from the
glacial gravels, especially if the Clactonian tools were small-sized.
The stone industry of the bottommost layer of the Mount Carmel cave,
in Palestine, where somewhat similar tools were found, has also been
called Tayacian.
I shall have to bring in many unfamiliar words for the names of the
industries. The industries are usually named after the places where
they were first found, and since these were in most cases in France,
most of the names which follow will be of French origin. However,
the names have simply become handles and are in use far beyond the
boundaries of France. It would be better if we had a non-place-name
terminology, but archeologists have not yet been able to agree on such
a terminology.
THE ACHEULEAN INDUSTRY
Both in France and in Palestine, as well as in some African cave
sites, the next layers in the deep caves have an industry in both the
core-biface and the flake traditions. The core-biface tools usually
make up less than half of all the tools in the industry. However,
the name of the biface type of tool is generally given to the whole
industry. It is called the _Acheulean_, actually a late form of it, as
Acheulean is also used for earlier core-biface tools taken from the
glacial gravels. In western Europe, the name used is _Upper Acheulean_
or _Micoquian_. The same terms have been borrowed to name layers E and
F in the Tabun cave, on Mount Carmel in Palestine.
The Acheulean core-biface type of tool is worked on two faces so as
to give a cutting edge all around. The outline of its front view may
be oval, or egg-shaped, or a quite pointed pear shape. The large
chip-scars of the Acheulean core-bifaces are shallow and flat. It is
suspected that this resulted from the removal of the chips with a
wooden club; the deep chip-scars of the earlier Abbevillian core-biface
came from beating the tool against a stone anvil. These tools are
really the best and also the final products of the core-biface
tradition. We first noticed the tradition in the early glacial gravels
(p. 43); now we see its end, but also its finest examples, in the
deeper cave levels.
The flake tools, which really make up the greater bulk of this
industry, are simple scrapers and chips with sharp cutting edges. The
habits used to prepare them must have been pretty much the same as
those used for at least one of the flake industries we shall mention
presently.
There is very little else in these early cave layers. We do not have
a proper industry of bone tools. There are traces of fire, and of
animal bones, and a few shells. In Palestine, there are many more
bones of deer than of gazelle in these layers; the deer lives in a
wetter climate than does the gazelle. In the European cave layers, the
animal bones are those of beasts that live in a warm climate. They
belonged in the last interglacial period. We have not yet found the
bones of fossil men definitely in place with this industry.
[Illustration: ACHEULEAN BIFACE]
FLAKE INDUSTRIES FROM THE CAVES
Two more stone industries--the _Levalloisian_ and the
_Mousterian_--turn up at approximately the same time in the European
cave layers. Their tools seem to be mainly in the flake tradition,
but according to some of the authorities their preparation also shows
some combination with the habits by which the core-biface tools were
prepared.
Now notice that I don't tell you the Levalloisian and the Mousterian
layers are both above the late Acheulean layers. Look at the cave
section (p. 57) and you'll find that some Mousterian of Acheulean
tradition appears above some typical Mousterian. This means that
there may be some kinds of Acheulean industries that are later than
some kinds of Mousterian. The same is true of the Levalloisian.
There were now several different kinds of habits that men used in
making stone tools. These habits were based on either one or the other
of the two traditions--core-biface or flake--or on combinations of
the habits used in the preparation techniques of both traditions. All
were popular at about the same time. So we find that people who made
one kind of stone tool industry lived in a cave for a while. Then they
gave up the cave for some reason, and people with another industry
moved in. Then the first people came back--or at least somebody with
the same tool-making habits as the first people. Or maybe a third group
of tool-makers moved in. The people who had these different habits for
making their stone tools seem to have moved around a good deal. They no
doubt borrowed and exchanged tricks of the trade with each other. There
were no patent laws in those days.
The extremely complicated interrelationships of the different habits
used by the tool-makers of this range of time are at last being
systematically studied. M. François Bordes has developed a statistical
method of great importance for understanding these tool preparation
habits.
THE LEVALLOISIAN AND MOUSTERIAN
The easiest Levalloisian tool to spot is a big flake tool. The trick
in making it was to fashion carefully a big chunk of stone (called
the Levalloisian "tortoise core," because it resembles the shape of
a turtle-shell) and then to whack this in such a way that a large
flake flew off. This large thin flake, with sharp cutting edges, is
the finished Levalloisian tool. There were various other tools in a
Levalloisian industry, but this is the characteristic _Levalloisian_
tool.
There are several typical Mousterian stone tools. Different from
the tools of the Levalloisian type, these were made from disc-like
cores. There are medium-sized flake side scrapers. There are also
some small pointed tools and some small hand axes. The last of these
tool types is often a flake worked on both of the flat sides (that
is, bifacially). There are also pieces of flint worked into the form
of crude balls. The pointed tools may have been fixed on shafts to
make short jabbing spears; the round flint balls may have been used as
bolas. Actually, we don't _know_ what either tool was used for. The
points and side scrapers are illustrated (pp. 64 and 66).
[Illustration: LEVALLOIS FLAKE]
THE MIXING OF TRADITIONS
Nowadays the archeologists are less and less sure of the importance
of any one specific tool type and name. Twenty years ago, they used
to speak simply of Acheulean or Levalloisian or Mousterian tools.
Now, more and more, _all_ of the tools from some one layer in a
cave are called an industry, which is given a mixed name. Thus we
have Levalloiso-Mousterian, and Acheuleo-Levalloisian, and even
Acheuleo-Mousterian (or Mousterian of Acheulean tradition). Bordes'
systematic work is beginning to clear up some of our confusion.
The time of these late Acheuleo-Levalloiso-Mousterioid industries
is from perhaps as early as 100,000 years ago. It may have lasted
until well past 50,000 years ago. This was the time of the first
phase of the last great glaciation. It was also the time that the
classic group of Neanderthal men was living in Europe. A number of
the Neanderthal fossil finds come from these cave layers. Before the
different habits of tool preparation were understood, it used to be
popular to say Neanderthal man was "Mousterian man." I think this is
wrong. What used to be called Mousterian is now known to be a variety
of industries with tools of both core-biface and flake habits, and
so mixed that the word "Mousterian" used alone really doesn't mean
anything. The Neanderthalers doubtless understood the tool preparation
habits by means of which Acheulean, Levalloisian and Mousterian type
tools were produced. We also have the more modern-like Mount Carmel
people, found in a cave layer of Palestine with tools almost entirely
in the flake tradition, called Levalloiso-Mousterian, and the
Fontéchevade-Tayacian (p. 59).
[Illustration: MOUSTERIAN POINT]
OTHER SUGGESTIONS OF LIFE IN THE EARLY CAVE LAYERS
Except for the stone tools, what do we know of the way men lived in the
time range after 100,000 to perhaps 40,000 years ago or even later?
We know that in the area from Europe to Palestine, at least some of
the people (some of the time) lived in the fronts of caves and warmed
themselves over fires. In Europe, in the cave layers of these times,
we find the bones of different animals; the bones in the lowest layers
belong to animals that lived in a warm climate; above them are the
bones of those who could stand the cold, like the reindeer and mammoth.
Thus, the meat diet must have been changing, as the glacier crept
farther south. Shells and possibly fish bones have lasted in these
cave layers, but there is not a trace of the vegetable foods and the
nuts and berries and other wild fruits that must have been eaten when
they could be found.
[Illustration: CHART SHOWING PRESENT UNDERSTANDING OF RELATIONSHIPS AND
SUCCESSION OF TOOL-PREPARATION TRADITIONS, INDUSTRIES, AND ASSEMBLAGES
OF WEST-CENTRAL EUROPE
Wavy lines indicate transitions in industrial habits. These transitions
are not yet understood in detail. The glacial and climatic scheme shown
is the alpine one.]
Bone tools have also been found from this period. Some are called
scrapers, and there are also long chisel-like leg-bone fragments
believed to have been used for skinning animals. Larger hunks of bone,
which seem to have served as anvils or chopping blocks, are fairly
common.
Bits of mineral, used as coloring matter, have also been found. We
don't know what the color was used for.
[Illustration: MOUSTERIAN SIDE SCRAPER]
There is a small but certain number of cases of intentional burials.
These burials have been found on the floors of the caves; in other
words, the people dug graves in the places where they lived. The holes
made for the graves were small. For this reason (or perhaps for some
other?) the bodies were in a curled-up or contracted position. Flint or
bone tools or pieces of meat seem to have been put in with some of the
bodies. In several cases, flat stones had been laid over the graves.
TOOLS FROM AFRICA AND ASIA ABOUT 100,000 YEARS AGO
Professor Movius characterizes early prehistoric Africa as a continent
showing a variety of stone industries. Some of these industries were
purely local developments and some were practically identical with
industries found in Europe at the same time. From northwest Africa
to Capetown--excepting the tropical rain forest region of the west
center--tools of developed Acheulean, Levalloisian, and Mousterian
types have been recognized. Often they are named after African place
names.
In east and south Africa lived people whose industries show a
development of the Levalloisian technique. Such industries are
called Stillbay. Another industry, developed on the basis of the
Acheulean technique, is called Fauresmith. From the northwest comes
an industry with tanged points and flake-blades; this is called the
Aterian. The tropical rain forest region contained people whose stone
tools apparently show adjustment to this peculiar environment; the
so-called Sangoan industry includes stone picks, adzes, core-bifaces
of specialized Acheulean type, and bifacial points which were probably
spearheads.
In western Asia, even as far as the east coast of India, the tools of
the Eurafrican core-biface and flake tool traditions continued to be
used. But in the Far East, as we noted in the last chapter, men had
developed characteristic stone chopper and chopping tools. This tool
preparation tradition--basically a pebble tool tradition--lasted to the
very end of the Ice Age.
When more intact open air sites such as that of an earlier time at
Olorgesailie, and more stratified cave sites are found and excavated
in Asia and Africa, we shall be able to get a more complete picture.
So far, our picture of the general cultural level of the Old World at
about 100,000 years ago--and soon afterwards--is best from Europe, but
it is still far from complete there, too.
CULTURE AT THE BEGINNING OF THE LAST GREAT GLACIAL PERIOD
The few things we have found must indicate only a very small part
of the total activities of the people who lived at the time. All of
the things they made of wood and bark, of skins, of anything soft,
are gone. The fact that burials were made, at least in Europe and
Palestine, is pretty clear proof that the people had some notion of a
life after death. But what this notion really was, or what gods (if
any) men believed in, we cannot know. Dr. Movius has also reminded me
of the so-called bear cults--cases in which caves have been found which
contain the skulls of bears in apparently purposeful arrangement. This
might suggest some notion of hoarding up the spirits or the strength of
bears killed in the hunt. Probably the people lived in small groups,
as hunting and food-gathering seldom provide enough food for large
groups of people. These groups probably had some kind of leader or
chief. Very likely the rude beginnings of rules for community life
and politics, and even law, were being made. But what these were, we
do not know. We can only guess about such things, as we can only guess
about many others; for example, how the idea of a family must have been
growing, and how there may have been witch doctors who made beginnings
in medicine or in art, in the materials they gathered for their trade.
The stone tools help us most. They have lasted, and we can find
them. As they come to us, from this cave or that, and from this
layer or that, the tool industries show a variety of combinations
of the different basic habits or traditions of tool preparation.
This seems only natural, as the groups of people must have been very
small. The mixtures and blendings of the habits used in making stone
tools must mean that there were also mixtures and blends in many of
the other ideas and beliefs of these small groups. And what this
probably means is that there was no one _culture_ of the time. It is
certainly unlikely that there were simply three cultures, Acheulean,
Levalloisian, and Mousterian, as has been thought in the past.
Rather there must have been a great variety of loosely related cultures
at about the same stage of advancement. We could say, too, that here
we really begin to see, for the first time, that remarkable ability
of men to adapt themselves to a variety of conditions. We shall see
this adaptive ability even more clearly as time goes on and the record
becomes more complete.
Over how great an area did these loosely related cultures reach in
the time 75,000 to 45,000 or even as late as 35,000 years ago? We
have described stone tools made in one or another of the flake and
core-biface habits, for an enormous area. It covers all of Europe, all
of Africa, the Near East, and parts of India. It is perfectly possible
that the flake and core-biface habits lasted on after 35,000 years ago,
in some places outside of Europe. In northern Africa, for example, we
are certain that they did (see chart, p. 72).
On the other hand, in the Far East (China, Burma, Java) and in northern
India, the tools of the old chopper-tool tradition were still being
made. Out there, we must assume, there was a different set of loosely
related cultures. At least, there was a different set of loosely
related habits for the making of tools. But the men who made them must
have looked much like the men of the West. Their tools were different,
but just as useful.
As to what the men of the West looked like, I've already hinted at all
we know so far (pp. 29 ff.). The Neanderthalers were present at
the time. Some more modern-like men must have been about, too, since
fossils of them have turned up at Mount Carmel in Palestine, and at
Teshik Tash, in Trans-caspian Russia. It is still too soon to know
whether certain combinations of tools within industries were made
only by certain physical types of men. But since tools of both the
core-biface and the flake traditions, and their blends, turn up from
South Africa to England to India, it is most unlikely that only one
type of man used only one particular habit in the preparation of tools.
What seems perfectly clear is that men in Africa and men in India were
making just as good tools as the men who lived in western Europe.
EARLY MODERNS
[Illustration]
From some time during the first inter-stadial of the last great
glaciation (say some time after about 40,000 years ago), we have
more accurate dates for the European-Mediterranean area and less
accurate ones for the rest of the Old World. This is probably
because the effects of the last glaciation have been studied in the
European-Mediterranean area more than they have been elsewhere.
A NEW TRADITION APPEARS
Something new was probably beginning to happen in the
European-Mediterranean area about 40,000 years ago, though all the
rest of the Old World seems to have been going on as it had been. I
can't be sure of this because the information we are using as a basis
for dates is very inaccurate for the areas outside of Europe and the
Mediterranean.
We can at least make a guess. In Egypt and north Africa, men were still
using the old methods of making stone tools. This was especially true
of flake tools of the Levalloisian type, save that they were growing
smaller and smaller as time went on. But at the same time, a new
tradition was becoming popular in westernmost Asia and in Europe. This
was the blade-tool tradition.
BLADE TOOLS
A stone blade is really just a long parallel-sided flake, as the
drawing shows. It has sharp cutting edges, and makes a very useful
knife. The real trick is to be able to make one. It is almost
impossible to make a blade out of any stone but flint or a natural
volcanic glass called obsidian. And even if you have flint or obsidian,
you first have to work up a special cone-shaped blade-core, from
which to whack off blades.
[Illustration: PLAIN BLADE]
You whack with a hammer stone against a bone or antler punch which is
directed at the proper place on the blade-core. The blade-core has to
be well supported or gripped while this is going on. To get a good
flint blade tool takes a great deal of know-how.
Remember that a tradition in stone tools means no more than that some
particular way of making the tools got started and lasted a long time.
Men who made some tools in one tradition or set of habits would also
make other tools for different purposes by means of another tradition
or set of habits. It was even possible for the two sets of habits to
become combined.
THE EARLIEST BLADE TOOLS
The oldest blade tools we have found were deep down in the layers of
the Mount Carmel caves, in Tabun Eb and Ea. Similar tools have been
found in equally early cave levels in Syria; their popularity there
seems to fluctuate a bit. Some more or less parallel-sided flakes are
known in the Levalloisian industry in France, but they are probably
no earlier than Tabun E. The Tabun blades are part of a local late
Acheulean industry, which is characterized by core-biface hand
axes, but which has many flake tools as well. Professor F. E.
Zeuner believes that this industry may be more than 120,000 years old;
actually its date has not yet been fixed, but it is very old--older
than the fossil finds of modern-like men in the same caves.
[Illustration: SUCCESSION OF ICE AGE FLINT TYPES, INDUSTRIES, AND
ASSEMBLAGES, AND OF FOSSIL MEN, IN NORTHWESTERN EURAFRASIA]
For some reason, the habit of making blades in Palestine and Syria was
interrupted. Blades only reappeared there at about the same time they
were first made in Europe, some time after 45,000 years ago; that is,
after the first phase of the last glaciation was ended.
[Illustration: BACKED BLADE]
We are not sure just where the earliest _persisting_ habits for the
production of blade tools developed. Impressed by the very early
momentary appearance of blades at Tabun on Mount Carmel, Professor
Dorothy A. Garrod first favored the Near East as a center of origin.
She spoke of some as yet unidentified "Asiatic centre," which she
thought might be in the highlands of Iran or just beyond. But more
recent work has been done in this area, especially by Professor Coon,
and the blade tools do not seem to have an early appearance there. When
the blade tools reappear in the Syro-Palestinian area, they do so in
industries which also include Levalloiso-Mousterian flake tools. From
the point of view of form and workmanship, the blade tools themselves
are not so fine as those which seem to be making their appearance
in western Europe about the same time. There is a characteristic
Syro-Palestinian flake point, possibly a projectile tip, called the
Emiran, which is not known from Europe. The appearance of blade tools,
together with Levalloiso-Mousterian flakes, continues even after the
Emiran point has gone out of use.
It seems clear that the production of blade tools did not immediately
swamp the set of older habits in Europe, too; the use of flake
tools also continued there. This was not so apparent to the older
archeologists, whose attention was focused on individual tool types. It
is not, in fact, impossible--although it is certainly not proved--that
the technique developed in the preparation of the Levalloisian tortoise
core (and the striking of the Levalloisian flake from it) might have
followed through to the conical core and punch technique for the
production of blades. Professor Garrod is much impressed with the speed
of change during the later phases of the last glaciation, and its
probable consequences. She speaks of the greater number of industries
"having enough individual character to be classified as distinct ...
since evolution now starts to outstrip diffusion." Her "evolution" here
is of course an industrial evolution rather than a biological one.
Certainly the people of Europe had begun to make blade tools during
the warm spell after the first phase of the last glaciation. By about
40,000 years ago blades were well established. The bones of the blade
tool makers we've found so far indicate that anatomically modern men
had now certainly appeared. Unfortunately, only a few fossil men have
so far been found from the very beginning of the blade tool range in
Europe (or elsewhere). What I certainly shall _not_ tell you is that
conquering bands of fine, strong, anatomically modern men, armed with
superior blade tools, came sweeping out of the East to exterminate the
lowly Neanderthalers. Even if we don't know exactly what happened, I'd
lay a good bet it wasn't that simple.
We do know a good deal about different blade industries in Europe.
Almost all of them come from cave layers. There is a great deal of
complication in what we find. The chart (p. 72) tries to simplify
this complication; in fact, it doubtless simplifies it too much. But
it may suggest all the complication of industries which is going
on at this time. You will note that the upper portion of my much
simpler chart (p. 65) covers the same material (in the section
marked "Various Blade-Tool Industries"). That chart is certainly too
simplified.
You will realize that all this complication comes not only from
the fact that we are finding more material. It is due also to the
increasing ability of men to adapt themselves to a great variety of
situations. Their tools indicate this adaptiveness. We know there was
a good deal of climatic change at this time. The plants and animals
that men used for food were changing, too. The great variety of tools
and industries we now find reflect these changes and the ability of men
to keep up with the times. Now, for example, is the first time we are
sure that there are tools to _make_ other tools. They also show men's
increasing ability to adapt themselves.
SPECIAL TYPES OF BLADE TOOLS
The most useful tools that appear at this time were made from blades.
1. The backed blade. This is a knife made of a flint blade, with
one edge purposely blunted, probably to save the user's fingers
from being cut. There are several shapes of backed blades (p.
73).
[Illustration: TWO BURINS]
2. The _burin_ or graver. The burin was the original chisel. Its
cutting edge is _transverse_, like a chisels. Some burins are
made like a screw-driver, save that burins are sharp. Others have
edges more like the blade of a chisel or a push plane, with
only one bevel. Burins were probably used to make slots in wood
and bone; that is, to make handles or shafts for other tools.
They must also be the tools with which much of the engraving on
bone (see p. 83) was done. There is a bewildering variety of
different kinds of burins.
[Illustration: TANGED POINT]
3. The tanged point. These stone points were used to tip arrows or
light spears. They were made from blades, and they had a long tang
at the bottom where they were fixed to the shaft. At the place
where the tang met the main body of the stone point, there was
a marked shoulder, the beginnings of a barb. Such points had
either one or two shoulders.
[Illustration: NOTCHED BLADE]
4. The notched or strangulated blade. Along with the points for
arrows or light spears must go a tool to prepare the arrow or
spear shaft. Today, such a tool would be called a draw-knife or
a spoke-shave, and this is what the notched blades probably are.
Our spoke-shaves have sharp straight cutting blades and really
shave. Notched blades of flint probably scraped rather than cut.
5. The awl, drill, or borer. These blade tools are worked out
to a spike-like point. They must have been used for making holes
in wood, bone, shell, skin, or other things.
[Illustration: DRILL OR AWL]
6. The end-scraper on a blade is a tool with one or both ends
worked so as to give a good scraping edge. It could have been used
to hollow out wood or bone, scrape hides, remove bark from trees,
and a number of other things (p. 78).
There is one very special type of flint tool, which is best known from
western Europe in an industry called the Solutrean. These tools were
usually made of blades, but the best examples are so carefully worked
on both sides (bifacially) that it is impossible to see the original
blade. This tool is
7. The laurel leaf point. Some of these tools were long and
dagger-like, and must have been used as knives or daggers. Others
were small, called willow leaf, and must have been mounted on
spear or arrow shafts. Another typical Solutrean tool is the
shouldered point. Both the laurel leaf and shouldered point
types are illustrated (see above and p. 79).
[Illustration: END-SCRAPER ON A BLADE]
[Illustration: LAUREL LEAF POINT]
The industries characterized by tools in the blade tradition also
yield some flake and core tools. We will end this list with two types
of tools that appear at this time. The first is made of a flake; the
second is a core tool.
[Illustration: SHOULDERED POINT]
8. The keel-shaped round scraper is usually small and quite round,
and has had chips removed up to a peak in the center. It is called
keel-shaped because it is supposed to look (when upside down)
like a section through a boat. Actually, it looks more like a tent
or an umbrella. Its outer edges are sharp all the way around, and
it was probably a general purpose scraping tool (see illustration,
p. 81).
9. The keel-shaped nosed scraper is a much larger and heavier tool
than the round scraper. It was made on a core with a flat bottom,
and has one nicely worked end or nose. Such tools are usually
large enough to be easily grasped, and probably were used like
push planes (see illustration, p. 81).
[Illustration: KEEL-SHAPED ROUND SCRAPER]
[Illustration: KEEL-SHAPED NOSED SCRAPER]
The stone tools (usually made of flint) we have just listed are among
the most easily recognized blade tools, although they show differences
in detail at different times. There are also many other kinds. Not
all of these tools appear in any one industry at one time. Thus the
different industries shown in the chart (p. 72) each have only some
of the blade tools weve just listed, and also a few flake tools. Some
industries even have a few core tools. The particular types of blade
tools appearing in one cave layer or another, and the frequency of
appearance of the different types, tell which industry we have in each
layer.
OTHER KINDS OF TOOLS
By this time in Europe--say from about 40,000 to about 10,000 years
ago--we begin to find other kinds of material too. Bone tools begin
to appear. There are knives, pins, needles with eyes, and little
double-pointed straight bars of bone that were probably fish-hooks. The
fish-line would have been fastened in the center of the bar; when the
fish swallowed the bait, the bar would have caught cross-wise in the
fishs mouth.
One quite special kind of bone tool is a long flat point for a light
spear. It has a deep notch cut up into the breadth of its base, and is
called a split-based bone point (p. 82). We know examples of bone
beads from these times, and of bone handles for flint tools. Pierced
teeth of some animals were worn as beads or pendants, but I am not sure
that elks' teeth were worn this early. There are even spool-shaped
buttons or toggles.
[Illustration: SPLIT-BASED BONE POINT]
[Illustration: SPEAR-THROWER]
[Illustration: BONE HARPOON]
Antler came into use for tools, especially in central and western
Europe. We do not know the use of one particular antler tool that
has a large hole bored in one end. One suggestion is that it was
a "thong-stropper" used to strop or work up hide thongs (see
illustration, below); another suggestion is that it was an arrow-shaft
straightener.
Another interesting tool, usually of antler, is the spear-thrower,
which is little more than a stick with a notch or hook on one end.
The hook fits into the butt end of the spear, and the length of the
spear-thrower allows you to put much more power into the throw (p.
82). It works on pretty much the same principle as the sling.
Very fancy harpoons of antler were also made in the latter half of
the period in western Europe. These harpoons had barbs on one or both
sides and a base which would slip out of the shaft (p. 82). Some have
engraved decoration.
THE BEGINNING OF ART
[Illustration: THONG-STROPPER]
In western Europe, at least, the period saw the beginning of several
kinds of art work. It is handy to break the art down into two great
groups: the movable art, and the cave paintings and sculpture. The
movable art group includes the scratchings, engravings, and modeling
which decorate tools and weapons. Knives, stroppers, spear-throwers,
harpoons, and sometimes just plain fragments of bone or antler are
often carved. There is also a group of large flat pebbles which seem
almost to have served as sketch blocks. The surfaces of these various
objects may show animals, or rather abstract floral designs, or
geometric designs.
[Illustration: VENUS FIGURINE FROM WILLENDORF]
Some of the movable art is not done on tools. The most remarkable
examples of this class are little figures of women. These women seem to
be pregnant, and their most female characteristics are much emphasized.
It is thought that these "Venus" or "Mother-goddess" figurines may be
meant to show the great forces of nature--fertility and the birth of
life.
CAVE PAINTINGS
In the paintings on walls and ceilings of caves we have some examples
that compare with the best art of any time. The subjects were usually
animals, the great cold-weather beasts of the end of the Ice Age: the
mammoth, the wooly rhinoceros, the bison, the reindeer, the wild horse,
the bear, the wild boar, and wild cattle. As in the movable art, there
are different styles in the cave art. The really great cave art is
pretty well restricted to southern France and Cantabrian (northwestern)
Spain.
There are several interesting things about the Franco-Cantabrian cave
art. It was done deep down in the darkest and most dangerous parts of
the caves, although the men lived only in the openings of caves. If you
think what they must have had for lights--crude lamps of hollowed stone
have been found, which must have burned some kind of oil or grease,
with a matted hair or fiber wick--and of the animals that may have
lurked in the caves, you'll understand the part about danger. Then,
too, we're sure the pictures these people painted were not simply to be
looked at and admired, for they painted one picture right over other
pictures which had been done earlier. Clearly, it was the _act_ of
_painting_ that counted. The painter had to go way down into the most
mysterious depths of the earth and create an animal in paint. Possibly
he believed that by doing this he gained some sort of magic power over
the same kind of animal when he hunted it in the open air. It certainly
doesn't look as if he cared very much about the picture he painted--as
a finished product to be admired--for he or somebody else soon went
down and painted another animal right over the one he had done.
The cave art of the Franco-Cantabrian style is one of the great
artistic achievements of all time. The subjects drawn are almost always
the larger animals of the time: the bison, wild cattle and horses, the
wooly rhinoceros, the mammoth, the wild boar, and the bear. In some of
the best examples, the beasts are drawn in full color and the paintings
are remarkably alive and charged with energy. They come from the hands
of men who knew the great animals well--knew the feel of their fur, the
tremendous drive of their muscles, and the danger one faced when he
hunted them.
Another artistic style has been found in eastern Spain. It includes
lively drawings, often of people hunting with bow and arrow. The East
Spanish art is found on open rock faces and in rock-shelters. It is
less spectacular and apparently more recent than the Franco-Cantabrian
cave art.
LIFE AT THE END OF THE ICE AGE IN EUROPE
Life in these times was probably as good as a hunter could expect it
to be. Game and fish seem to have been plentiful; berries and wild
fruits probably were, too. From France to Russia, great pits or
piles of animal bones have been found. Some of this killing was done
as our Plains Indians killed the buffalo--by stampeding them over
steep river banks or cliffs. There were also good tools for hunting,
however. In western Europe, people lived in the openings of caves and
under overhanging rocks. On the great plains of eastern Europe, very
crude huts were being built, half underground. The first part of this
time must have been cold, for it was the middle and end phases of the
last great glaciation. Northern Europe from Scotland to Scandinavia,
northern Germany and Russia, and also the higher mountains to the
south, were certainly covered with ice. But people had fire, and the
needles and tools that were used for scraping hides must mean that they
wore clothing.
It is clear that men were thinking of a great variety of things beside
the tools that helped them get food and shelter. Such burials as we
find have more grave-gifts than before. Beads and ornaments and often
flint, bone, or antler tools are included in the grave, and sometimes
the body is sprinkled with red ochre. Red is the color of blood, which
means life, and of fire, which means heat. Professor Childe wonders if
the red ochre was a pathetic attempt at magic--to give back to the body
the heat that had gone from it. But pathetic or not, it is sure proof
that these people were already moved by death as men still are moved by
it.
Their art is another example of the direction the human mind was
taking. And when I say human, I mean it in the fullest sense, for this
is the time in which fully modern man has appeared. On page 34, we
spoke of the Cro-Magnon group and of the Combe Capelle-Brünn group of
Caucasoids and of the Grimaldi Negroids, who are no longer believed
to be Negroid. I doubt that any one of these groups produced most of
the achievements of the times. It's not yet absolutely sure which
particular group produced the great cave art. The artists were almost
certainly a blend of several (no doubt already mixed) groups. The pair
of Grimaldians were buried in a grave with a sprinkling of red ochre,
and were provided with shell beads and ornaments and with some blade
tools of flint. Regardless of the different names once given them by
the human paleontologists, each of these groups seems to have shared
equally in the cultural achievements of the times, for all that the
archeologists can say.
MICROLITHS
One peculiar set of tools seems to serve as a marker for the very last
phase of the Ice Age in southwestern Europe. This tool-making habit is
also found about the shore of the Mediterranean basin, and it moved
into northern Europe as the last glaciation pulled northward. People
began making blade tools of very small size. They learned how to chip
very slender and tiny blades from a prepared core. Then they made these
little blades into tiny triangles, half-moons (lunates), trapezoids,
and several other geometric forms. These little tools are called
"microliths." They are so small that most of them must have been fixed
in handles or shafts.
[Illustration: MICROLITHS
BLADE FRAGMENT
BURIN
LUNATE
TRAPEZOID
SCALENE TRIANGLE
ARROWHEAD]
We have found several examples of microliths mounted in shafts. In
northern Europe, where their use soon spread, the microlithic triangles
or lunates were set in rows down each side of a bone or wood point.
One corner of each little triangle stuck out, and the whole thing
made a fine barbed harpoon. In historic times in Egypt, geometric
trapezoidal microliths were still in use as arrowheads. They were
fastened--broad end out--on the end of an arrow shaft. It seems queer
to give an arrow a point shaped like a T. Actually, the little points
were very sharp, and must have pierced the hides of animals very
easily. We also think that the broader cutting edge of the point may
have caused more bleeding than a pointed arrowhead would. In hunting
fleet-footed animals like the gazelle, which might run for miles after
being shot with an arrow, it was an advantage to cause as much bleeding
as possible, for the animal would drop sooner.
We are not really sure where the microliths were first invented. There
is some evidence that they appear early in the Near East. Their use
was very common in northwest Africa but this came later. The microlith
makers who reached south Russia and central Europe possibly moved up
out of the Near East. Or it may have been the other way around; we
simply don't yet know.
Remember that the microliths we are talking about here were made from
carefully prepared little blades, and are often geometric in outline.
Each microlithic industry proper was made up, in good part, of such
tiny blade tools. But there were also some normal-sized blade tools and
even some flake scrapers, in most microlithic industries. I emphasize
this bladelet and the geometric character of the microlithic industries
of the western Old World, since there has sometimes been confusion in
the matter. Sometimes small flake chips, utilized as minute pointed
tools, have been called microliths. They may be _microlithic_ in size
in terms of the general meaning of the word, but they do not seem to
belong to the sub-tradition of the blade tool preparation habits which
we have been discussing here.
LATER BLADE-TOOL INDUSTRIES OF THE NEAR EAST AND AFRICA
The blade-tool industries of normal size we talked about earlier spread
from Europe to central Siberia. We noted that blade tools were made
in western Asia too, and early, although Professor Garrod is no longer
sure that the whole tradition originated in the Near East. If you look
again at my chart (p. 72) you will note that in western Asia I list
some of the names of the western European industries, but with the
qualification "-like" (for example, "Gravettian-like"). The western
Asiatic blade-tool industries do vaguely recall some aspects of those
of western Europe, but we would probably be better off if we used
completely local names for them. The Emiran of my chart is such an
example; its industry includes a long spike-like blade point which has
no western European counterpart.
When we last spoke of Africa (p. 66), I told you that stone tools
there were continuing in the Levalloisian flake tradition, and were
becoming smaller. At some time during this process, two new tool
types appeared in northern Africa: one was the Aterian point with
a tang (p. 67), and the other was a sort of laurel leaf point,
called the Sbaikian. These two tool types were both produced from
flakes. The Sbaikian points, especially, are roughly similar to some
of the Solutrean points of Europe. It has been suggested that both the
Sbaikian and Aterian points may be seen on their way to France through
their appearance in the Spanish cave deposits of Parpallo, but there is
also a rival "pre-Solutrean" in central Europe. We still do not know
whether there was any contact between the makers of these north African
tools and the Solutrean tool-makers. What does seem clear is that the
blade-tool tradition itself arrived late in northern Africa.
NETHER AFRICA
Blade tools and laurel leaf points and some other probably late
stone tool types also appear in central and southern Africa. There
are geometric microliths on bladelets and even some coarse pottery in
east Africa. There is as yet no good way of telling just where these
items belong in time; in broad geological terms they are late.
Some people have guessed that they are as early as similar European
and Near Eastern examples, but I doubt it. The makers of small-sized
Levalloisian flake tools occupied much of Africa until very late in
time.
THE FAR EAST
India and the Far East still seem to be going their own way. In India,
some blade tools have been found. These are not well dated, save that
we believe they must be post-Pleistocene. In the Far East it looks as
if the old chopper-tool tradition was still continuing. For Burma,
Dr. Movius feels this is fairly certain; for China he feels even more
certain. Actually, we know very little about the Far East at about the
time of the last glaciation. This is a shame, too, as you will soon
agree.
THE NEW WORLD BECOMES INHABITED
At some time toward the end of the last great glaciation--almost
certainly after 20,000 years ago--people began to move over Bering
Strait, from Asia into America. As you know, the American Indians have
been assumed to be basically Mongoloids. New studies of blood group
types make this somewhat uncertain, but there is no doubt that the
ancestors of the American Indians came from Asia.
The stone-tool traditions of Europe, Africa, the Near and Middle East,
and central Siberia, did _not_ move into the New World. With only a
very few special or late exceptions, there are _no_ core-bifaces,
flakes, or blade tools of the Old World. Such things just haven't been
found here.
This is why I say it's a shame we don't know more of the end of the
chopper-tool tradition in the Far East. According to Weidenreich,
the Mongoloids were in the Far East long before the end of the last
glaciation. If the genetics of the blood group types do demand a
non-Mongoloid ancestry for the American Indians, who else may have been
in the Far East 25,000 years ago? We know a little about the habits
for making stone tools which these first people brought with them,
and these habits don't conform with those of the western Old World.
We'd better keep our eyes open for whatever happened to the end of
the chopper-tool tradition in northern China; already there are hints
that it lasted late there. Also we should watch future excavations
in eastern Siberia. Perhaps we shall find the chopper-tool tradition
spreading up that far.
THE NEW ERA
Perhaps it comes in part from the way I read the evidence and perhaps
in part it is only intuition, but I feel that the materials of this
chapter suggest a new era in the ways of life. Before about 40,000
years ago, people simply gathered their food, wandering over large
areas to scavenge or to hunt in a simple sort of way. But here we
have seen them settling-in more, perhaps restricting themselves in
their wanderings and adapting themselves to a given locality in more
intensive ways. This intensification might be suggested by the word
"collecting." The ways of life we described in the earlier chapters
were "food-gathering" ways, but now an era of "food-collecting" has
begun. We shall see further intensifications of it in the next chapter.
END AND PRELUDE
[Illustration]
Up to the end of the last glaciation, we prehistorians have a
relatively comfortable time schedule. The farther back we go the less
exact we can be about time and details. Elbow-room of five, ten,
even fifty or more thousands of years becomes available for us to
maneuver in as we work backward in time. But now our story has come
forward to the point where more exact methods of dating are at hand.
The radioactive carbon method reaches back into the span of the last
glaciation. There are other methods, developed by the geologists and
paleobotanists, which supplement and extend the usefulness of the
radioactive carbon dates. And, happily, as our means of being more
exact increases, our story grows more exciting. There are also more
details of culture for us to deal with, which add to the interest.
CHANGES AT THE END OF THE ICE AGE
The last great glaciation of the Ice Age was a two-part affair, with a
sub-phase at the end of the second part. In Europe the last sub-phase
of this glaciation commenced somewhere around 15,000 years ago. Then
the glaciers began to melt back, for the last time. Remember that
Professor Antevs (p. 19) isn't sure the Ice Age is over yet! This
melting sometimes went by fits and starts, and the weather wasn't
always changing for the better; but there was at least one time when
European weather was even better than it is now.
The melting back of the glaciers and the weather fluctuations caused
other changes, too. We know a fair amount about these changes in
Europe. In an earlier chapter, we said that the whole Ice Age was a
matter of continual change over long periods of time. As the last
glaciers began to melt back some interesting things happened to mankind.
In Europe, along with the melting of the last glaciers, geography
itself was changing. Britain and Ireland had certainly become islands
by 5000 B.C. The Baltic was sometimes a salt sea, sometimes a large
fresh-water lake. Forests began to grow where the glaciers had been,
and in what had once been the cold tundra areas in front of the
glaciers. The great cold-weather animals--the mammoth and the wooly
rhinoceros--retreated northward and finally died out. It is probable
that the efficient hunting of the earlier people of 20,000 or 25,000
to about 12,000 years ago had helped this process along (see p. 86).
Europeans, especially those of the post-glacial period, had to keep
changing to keep up with the times.
The archeological materials for the time from 10,000 to 6000 B.C. seem
simpler than those of the previous five thousand years. The great cave
art of France and Spain had gone; so had the fine carving in bone and
antler. Smaller, speedier animals were moving into the new forests. New
ways of hunting them, or ways of getting other food, had to be found.
Hence, new tools and weapons were necessary. Some of the people who
moved into northern Germany were successful reindeer hunters. Then the
reindeer moved off to the north, and again new sources of food had to
be found.
THE READJUSTMENTS COMPLETED IN EUROPE
After a few thousand years, things began to look better. Or at least
we can say this: By about 6000 B.C. we again get "hotter" archeological
materials. The best of these come from the north European area:
Britain, Belgium, Holland, Denmark, north Germany, southern Norway and
Sweden. Much of this north European material comes from bogs and swamps
where it had become water-logged and has kept very well. Thus we have
much more complete _assemblages_[4] than for any time earlier.
[4] Assemblage is a useful word when there are different kinds of
archeological materials belonging together, from one area and of
one time. An assemblage is made up of a number of industries
(that is, all the tools in chipped stone, all the tools in
bone, all the tools in wood, the traces of houses, etc.) and
everything else that manages to survive, such as the art, the
burials, the bones of the animals used as food, and the traces
of plant foods; in fact, everything that has been left to us
and can be used to help reconstruct the lives of the people to
whom it once belonged. Our own present-day assemblage would be
the sum total of all the objects in our mail-order catalogues,
department stores and supply houses of every sort, our churches,
our art galleries and other buildings, together with our roads,
canals, dams, irrigation ditches, and any other traces we might
leave of ourselves, from graves to garbage dumps. Not everything
would last, so that an archeologist digging us up--say 2,000
years from now--would find only the most durable items in our
assemblage.
The best known of these assemblages is the _Maglemosian_, named after a
great Danish peat-swamp where much has been found.
[Illustration: SKETCH OF MAGLEMOSIAN ASSEMBLAGE
CHIPPED STONE
HEMP
GROUND STONE
BONE AND ANTLER
WOOD]
In the Maglemosian assemblage the flint industry was still very
important. Blade tools, tanged arrow points, and burins were still
made, but there were also axes for cutting the trees in the new
forests. Moreover, the tiny microlithic blades, in a variety of
geometric forms, are also found. Thus, a specialized tradition that
possibly began east of the Mediterranean had reached northern Europe.
There was also a ground stone industry; some axes and club-heads were
made by grinding and polishing rather than by chipping. The industries
in bone and antler show a great variety of tools: axes, fish-hooks,
fish spears, handles and hafts for other tools, harpoons, and clubs.
A remarkable industry in wood has been preserved. Paddles, sled
runners, handles for tools, and bark floats for fish-nets have been
found. There are even fish-nets made of plant fibers. Canoes of some
kind were no doubt made. Bone and antler tools were decorated with
simple patterns, and amber was collected. Wooden bows and arrows are
found.
It seems likely that the Maglemosian bog finds are remains of summer
camps, and that in winter the people moved to higher and drier regions.
Childe calls them the "Forest folk"; they probably lived much the
same sort of life as did our pre-agricultural Indians of the north
central states. They hunted small game or deer; they did a great deal
of fishing; they collected what plant food they could find. In fact,
their assemblage shows us again that remarkable ability of men to adapt
themselves to change. They had succeeded in domesticating the dog; he
was still a very wolf-like dog, but his long association with mankind
had now begun. Professor Coon believes that these people were direct
descendants of the men of the glacial age and that they had much the
same appearance. He believes that most of the Ice Age survivors still
extant are living today in the northwestern European area.
SOUTH AND CENTRAL EUROPE PERHAPS AS READJUSTED AS THE NORTH
There is always one trouble with things that come from areas where
preservation is exceptionally good: The very quantity of materials in
such an assemblage tends to make things from other areas look poor
and simple, although they may not have been so originally at all. The
assemblages of the people who lived to the south of the Maglemosian
area may also have been quite large and varied; but, unfortunately,
relatively little of the southern assemblages has lasted. The
water-logged sites of the Maglemosian area preserved a great deal
more. Hence the Maglemosian itself _looks_ quite advanced to us, when
we compare it with the few things that have happened to last in other
areas. If we could go back and wander over the Europe of eight thousand
years ago, we would probably find that the peoples of France, central
Europe, and south central Russia were just as advanced as those of the
north European-Baltic belt.
South of the north European belt the hunting-food-collecting peoples
were living on as best they could during this time. One interesting
group, which seems to have kept to the regions of sandy soil and scrub
forest, made great quantities of geometric microliths. These are the
materials called _Tardenoisian_. The materials of the Forest folk of
France and central Europe generally are called _Azilian_; Dr. Movius
believes the term might best be restricted to the area south of the
Loire River.
HOW MUCH REAL CHANGE WAS THERE?
You can see that no really _basic_ change in the way of life has yet
been described. Childe sees the problem that faced the Europeans of
10,000 to 3000 B.C. as a problem in readaptation to the post-glacial
forest environment. By 6000 B.C. some quite successful solutions of
the problem--like the Maglemosian--had been made. The upsets that came
with the melting of the last ice gradually brought about all sorts of
changes in the tools and food-getting habits, but the people themselves
were still just as much simple hunters, fishers, and food-collectors as
they had been in 25,000 B.C. It could be said that they changed just
enough so that they would not have to change. But there is a bit more
to it than this.
Professor Mathiassen of Copenhagen, who knows the archeological remains
of this time very well, poses a question. He speaks of the material
as being "neither rich nor progressive, in fact rather stagnant," but
he goes on to add that the people had a certain receptiveness and
were able to adapt themselves quickly when the next change did come.
My own understanding of the situation is that the Forest folk made
nothing as spectacular as had the producers of the earlier Magdalenian
assemblage and the Franco-Cantabrian art. On the other hand, they
_seem_ to have been making many more different kinds of tools for many
more different kinds of tasks than had their Ice Age forerunners. I
emphasize "seem" because the preservation in the Maglemosian bogs
is very complete; certainly we cannot list anywhere near as many
different things for earlier times as we did for the Maglemosians
(p. 94). I believe this experimentation with all kinds of new tools
and gadgets, this intensification of "adaptiveness" (p. 91), this
"receptiveness," even if it is still only pointed toward hunting,
fishing, and food-collecting, is an important thing.
Remember that the only marker we have handy for the _beginning_ of
this tendency toward "receptiveness" and experimentation is the
little microlithic blade tools of various geometric forms. These, we
saw, began before the last ice had melted away, and they lasted on
in use for a very long time. I wish there were a better marker than
the microliths but I do not know of one. Remember, too, that as yet
we can only use the microliths as a marker in Europe and about the
Mediterranean.
CHANGES IN OTHER AREAS?
All this last section was about Europe. How about the rest of the world
when the last glaciers were melting away?
We simply don't know much about this particular time in other parts
of the world except in Europe, the Mediterranean basin and the Middle
East. People were certainly continuing to move into the New World by
way of Siberia and the Bering Strait about this time. But for the
greater part of Africa and Asia, we do not know exactly what was
happening. Some day, we shall no doubt find out; today we are without
clear information.
REAL CHANGE AND PRELUDE IN THE NEAR EAST
The appearance of the microliths and the developments made by the
Forest folk of northwestern Europe also mark an end. They show us
the terminal phase of the old food-collecting way of life. It grows
increasingly clear that at about the same time that the Maglemosian and
other Forest folk were adapting themselves to hunting, fishing, and
collecting in new ways to fit the post-glacial environment, something
completely new was being made ready in western Asia.
Unfortunately, we do not have as much understanding of the climate and
environment of the late Ice Age in western Asia as we have for most
of Europe. Probably the weather was never so violent or life quite
so rugged as it was in northern Europe. We know that the microliths
made their appearance in western Asia at least by 10,000 B.C. and
possibly earlier, marking the beginning of the terminal phase of
food-collecting. Then, gradually, we begin to see the build-up towards
the first _basic change_ in human life.
This change amounted to a revolution just as important as the
Industrial Revolution. In it, men first learned to domesticate
plants and animals. They began _producing_ their food instead of
simply gathering or collecting it. When their food-production
became reasonably effective, people could and did settle down in
village-farming communities. With the appearance of the little farming
villages, a new way of life was actually under way. Professor Childe
has good reason to speak of the food-producing revolution, for it was
indeed a revolution.
QUESTIONS ABOUT CAUSE
We do not yet know _how_ and _why_ this great revolution took place. We
are only just beginning to put the questions properly. I suspect the
answers will concern some delicate and subtle interplay between man and
nature. Clearly, both the level of culture and the natural condition of
the environment must have been ready for the great change, before the
change itself could come about.
It is going to take years of co-operative field work by both
archeologists and the natural scientists who are most helpful to them
before the _how_ and _why_ answers begin to appear. Anthropologically
trained archeologists are fascinated with the cultures of men in times
of great change. About ten or twelve thousand years ago, the general
level of culture in many parts of the world seems to have been ready
for change. In northwestern Europe, we saw that cultures changed
just enough so that they would not have to change. We linked this to
environmental changes with the coming of post-glacial times.
In western Asia, we archeologists can prove that the food-producing
revolution actually took place. We can see _the_ important consequence
of effective domestication of plants and animals in the appearance of
the settled village-farming community. And within the village-farming
community was the seed of civilization. The way in which effective
domestication of plants and animals came about, however, must also be
linked closely with the natural environment. Thus the archeologists
will not solve the _how_ and _why_ questions alone--they will need the
help of interested natural scientists in the field itself.
PRECONDITIONS FOR THE REVOLUTION
Especially at this point in our story, we must remember how culture and
environment go hand in hand. Neither plants nor animals domesticate
themselves; men domesticate them. Furthermore, men usually domesticate
only those plants and animals which are useful. There is a good
question here: What is cultural usefulness? But I shall side-step it to
save time. Men cannot domesticate plants and animals that do not exist
in the environment where the men live. Also, there are certainly some
animals and probably some plants that resist domestication, although
they might be useful.
This brings me back again to the point that _both_ the level of culture
and the natural condition of the environment--with the proper plants
and animals in it--must have been ready before domestication could
have happened. But this is precondition, not cause. Why did effective
food-production happen first in the Near East? Why did it happen
independently in the New World slightly later? Why also in the Far
East? Why did it happen at all? Why are all human beings not still
living as the Maglemosians did? These are the questions we still have
to face.
CULTURAL RECEPTIVENESS AND PROMISING ENVIRONMENTS
Until the archeologists and the natural scientists--botanists,
geologists, zoologists, and general ecologists--have spent many more
years on the problem, we shall not have full _how_ and _why_ answers. I
do think, however, that we are beginning to understand what to look for.
We shall have to learn much more of what makes the cultures of men
receptive and experimental. Did change in the environment alone
force it? Was it simply a case of Professor Toynbee's "challenge and
response"? I cannot believe the answer is quite that simple. Were it
so simple, we should want to know why the change hadn't come earlier,
along with earlier environmental changes. We shall not know the answer,
however, until we have excavated the traces of many more cultures of
the time in question. We shall doubtless also have to learn more about,
and think imaginatively about, the simpler cultures still left today.
The mechanics of culture in general will be bound to interest us.
It will also be necessary to learn much more of the environments of
10,000 to 12,000 years ago. In which regions of the world were the
natural conditions most promising? Did this promise include plants and
animals which could be domesticated, or did it only offer new ways of
food-collecting? There is much work to do on this problem, but we are
beginning to get some general hints.
Before I begin to detail the hints we now have from western Asia, I
want to do two things. First, I shall tell you of an old theory as to
how food-production might have appeared. Second, I will bother you with
some definitions which should help us in our thinking as the story goes
on.
AN OLD THEORY AS TO THE CAUSE OF THE REVOLUTION
The idea that change would result, if the balance between nature
and culture became upset, is of course not a new one. For at least
twenty-five years, there has been a general theory as to _how_ the
food-producing revolution happened. This theory depends directly on the
idea of natural change in the environment.
The five thousand years following about 10,000 B.C. must have been
very difficult ones, the theory begins. These were the years when
the most marked melting of the last glaciers was going on. While the
glaciers were in place, the climate to the south of them must have been
different from the climate in those areas today. You have no doubt read
that people once lived in regions now covered by the Sahara Desert.
This is true; just when is not entirely clear. The theory is that
during the time of the glaciers, there was a broad belt of rain winds
south of the glaciers. These rain winds would have kept north Africa,
the Nile Valley, and the Middle East green and fertile. But when the
glaciers melted back to the north, the belt of rain winds is supposed
to have moved north too. Then the people living south and east of the
Mediterranean would have found that their water supply was drying up,
that the animals they hunted were dying or moving away, and that the
plant foods they collected were dried up and scarce.
According to the theory, all this would have been true except in the
valleys of rivers and in oases in the growing deserts. Here, in the
only places where water was left, the men and animals and plants would
have clustered. They would have been forced to live close to one
another, in order to live at all. Presently the men would have seen
that some animals were more useful or made better food than others,
and so they would have begun to protect these animals from their
natural enemies. The men would also have been forced to try new plant
foods--foods which possibly had to be prepared before they could be
eaten. Thus, with trials and errors, but by being forced to live close
to plants and animals, men would have learned to domesticate them.
THE OLD THEORY TOO SIMPLE FOR THE FACTS
This theory was set up before we really knew anything in detail about
the later prehistory of the Near and Middle East. We now know that
the facts which have been found don't fit the old theory at all well.
Also, I have yet to find an American meteorologist who feels that we
know enough about the changes in the weather pattern to say that it can
have been so simple and direct. And, of course, the glacial ice which
began melting after 12,000 years ago was merely the last sub-phase of
the last great glaciation. There had also been three earlier periods
of great alpine glaciers, and long periods of warm weather in between.
If the rain belt moved north as the glaciers melted for the last time,
it must have moved in the same direction in earlier times. Thus, the
forced neighborliness of men, plants, and animals in river valleys and
oases must also have happened earlier. Why didn't domestication happen
earlier, then?
Furthermore, it does not seem to be in the oases and river valleys
that we have our first or only traces of either food-production
or the earliest farming villages. These traces are also in the
hill-flanks of the mountains of western Asia. Our earliest sites of the
village-farmers do not seem to indicate a greatly different climate
from that which the same region now shows. In fact, everything we now
know suggests that the old theory was just too simple an explanation to
have been the true one. The only reason I mention it--beyond correcting
the ideas you may get in the general texts--is that it illustrates the
kind of thinking we shall have to do, even if it is doubtless wrong in
detail.
We archeologists shall have to depend much more than we ever have on
the natural scientists who can really help us. I can tell you this from
experience. I had the great good fortune to have on my expedition staff
in Iraq in 1954-55, a geologist, a botanist, and a zoologist. Their
studies added whole new bands of color to my spectrum of thinking about
_how_ and _why_ the revolution took place and how the village-farming
community began. But it was only a beginning; as I said earlier, we are
just now learning to ask the proper questions.
ABOUT STAGES AND ERAS
Now come some definitions, so I may describe my material more easily.
Archeologists have always loved to make divisions and subdivisions
within the long range of materials which they have found. They often
disagree violently about which particular assemblage of material
goes into which subdivision, about what the subdivisions should be
named, about what the subdivisions really mean culturally. Some
archeologists, probably through habit, favor an old scheme of Grecized
names for the subdivisions: paleolithic, mesolithic, neolithic. I
refuse to use these words myself. They have meant too many different
things to too many different people and have tended to hide some pretty
fuzzy thinking. Probably you haven't even noticed my own scheme of
subdivision up to now, but I'd better tell you in general what it is.
I think of the earliest great group of archeological materials, from
which we can deduce only a food-gathering way of culture, as the
_food-gathering stage_. I say "stage" rather than "age," because it
is not quite over yet; there are still a few primitive people in
out-of-the-way parts of the world who remain in the _food-gathering
stage_. In fact, Professor Julian Steward would probably prefer to call
it a food-gathering _level_ of existence, rather than a stage. This
would be perfectly acceptable to me. I also tend to find myself using
_collecting_, rather than _gathering_, for the more recent aspects or
era of the stage, as the word "collecting" appears to have more sense
of purposefulness and specialization than does "gathering" (see p.
91).
Now, while I think we could make several possible subdivisions of the
food-gathering stage--I call my subdivisions of stages _eras_[5]--I
believe the only one which means much to us here is the last or
_terminal sub-era of food-collecting_ of the whole food-gathering
stage. The microliths seem to mark its approach in the northwestern
part of the Old World. It is really shown best in the Old World by
the materials of the Forest folk, the cultural adaptation to the
post-glacial environment in northwestern Europe. We talked about
the Forest folk at the beginning of this chapter, and I used the
Maglemosian assemblage of Denmark as an example.
[5] It is difficult to find words which have a sequence or gradation
of meaning with respect to both development and a range of time
in the past, or with a range of time from somewhere in the past
which is perhaps not yet ended. One standard Webster definition
of _stage_ is: "One of the steps into which the material
development of man ... is divided." I cannot find any dictionary
definition that suggests which of the words, _stage_ or _era_,
has the meaning of a longer span of time. Therefore, I have
chosen to let my eras be shorter, and to subdivide my stages
into eras. Webster gives _era_ as: "A signal stage of history,
an epoch." When I want to subdivide my eras, I find myself using
_sub-eras_. Thus I speak of the _eras_ within a _stage_ and of
the _sub-eras_ within an _era_; that is, I do so when I feel
that I really have to, and when the evidence is clear enough to
allow it.
The food-producing revolution ushers in the _food-producing stage_.
This stage began to be replaced by the _industrial stage_ only about
two hundred years ago. Now notice that my stage divisions are in terms
of technology and economics. We must think sharply to be sure that the
subdivisions of the stages, the eras, are in the same terms. This does
not mean that I think technology and economics are the only important
realms of culture. It is rather that for most of prehistoric time the
materials left to the archeologists tend to limit our deductions to
technology and economics.
I'm so soon out of my competence, as conventional ancient history
begins, that I shall only suggest the earlier eras of the
food-producing stage to you. This book is about prehistory, and I'm not
a universal historian.
THE TWO EARLIEST ERAS OF THE FOOD-PRODUCING STAGE
The food-producing stage seems to appear in western Asia with really
revolutionary suddenness. It is seen by the relative speed with which
the traces of new crafts appear in the earliest village-farming
community sites we've dug. It is seen by the spread and multiplication
of these sites themselves, and the remarkable growth in human
population we deduce from this increase in sites. We'll look at some
of these sites and the archeological traces they yield in the next
chapter. When such village sites begin to appear, I believe we are in
the _era of the primary village-farming community_. I also believe this
is the second era of the food-producing stage.
The first era of the food-producing stage, I believe, was an _era of
incipient cultivation and animal domestication_. I keep saying I
believe because the actual evidence for this earlier era is so slight
that one has to set it up mainly by playing a hunch for it. The reason
for playing the hunch goes about as follows.
One thing we seem to be able to see, in the food-collecting era in
general, is a tendency for people to begin to settle down. This
settling down seemed to become further intensified in the terminal
era. How this is connected with Professor Mathiassen's receptiveness
and the tendency to be experimental, we do not exactly know. The
evidence from the New World comes into play here as well as that from
the Old World. With this settling down in one place, the people of the
terminal era--especially the Forest folk whom we know best--began
making a great variety of new things. I remarked about this earlier in
the chapter. Dr. Robert M. Adams is of the opinion that this atmosphere
of experimentation with new tools--with new ways of collecting food--is
the kind of atmosphere in which one might expect trials at planting
and at animal domestication to have been made. We first begin to find
traces of more permanent life in outdoor camp sites, although caves
were still inhabited at the beginning of the terminal era. It is not
surprising at all that the Forest folk had already domesticated the
dog. In this sense, the whole era of food-collecting was becoming ready
and almost incipient for cultivation and animal domestication.
Northwestern Europe was not the place for really effective beginnings
in agriculture and animal domestication. These would have had to take
place in one of those natural environments of promise, where a variety
of plants and animals, each possible of domestication, was available in
the wild state. Let me spell this out. Really effective food-production
must include a variety of items to make up a reasonably well-rounded
diet. The food-supply so produced must be trustworthy, even though
the food-producing peoples themselves might be happy to supplement
it with fish and wild strawberries, just as we do when such things
are available. So, as we said earlier, part of our problem is that
of finding a region with a natural environment which includes--and
did include, some ten thousand years ago--a variety of possibly
domesticable wild plants and animals.
NUCLEAR AREAS
Now comes the last of my definitions. A region with a natural
environment which included a variety of wild plants and animals,
both possible and ready for domestication, would be a central
or core or _nuclear area_, that is, it would be when and _if_
food-production took place within it. It is pretty hard for me to
imagine food-production having ever made an independent start outside
such a nuclear area, although there may be some possible nuclear areas
in which food-production never took place (possibly in parts of Africa,
for example).
We know of several such nuclear areas. In the New World, Middle America
and the Andean highlands make up one or two; it is my understanding
that the evidence is not yet clear as to which. There seems to have
been a nuclear area somewhere in southeastern Asia, in the Malay
peninsula or Burma perhaps, connected with the early cultivation of
taro, breadfruit, the banana and the mango. Possibly the cultivation
of rice and the domestication of the chicken and of zebu cattle and
the water buffalo belong to this southeast Asiatic nuclear area. We
know relatively little about it archeologically, as yet. The nuclear
area which was the scene of the earliest experiment in effective
food-production was in western Asia. Since I know it best, I shall use
it as my example.
THE NUCLEAR NEAR EAST
The nuclear area of western Asia is naturally the one of greatest
interest to people of the western cultural tradition. Our cultural
heritage began within it. The area itself is the region of the hilly
flanks of rain-watered grass-land which build up to the high mountain
ridges of Iran, Iraq, Turkey, Syria, and Palestine. The map on page
125 indicates the region. If you have a good atlas, try to locate the
zone which surrounds the drainage basin of the Tigris and Euphrates
Rivers at elevations of from approximately 2,000 to 5,000 feet. The
lower alluvial land of the Tigris-Euphrates basin itself has very
little rainfall. Some years ago Professor James Henry Breasted called
the alluvial lands of the Tigris-Euphrates a part of the "fertile
crescent." These alluvial lands are very fertile if irrigated. Breasted
was most interested in the oriental civilizations of conventional
ancient history, and irrigation had been discovered before they
appeared.
The country of hilly flanks above Breasted's crescent receives from
10 to 20 or more inches of winter rainfall each year, which is about
what Kansas has. Above the hilly-flanks zone tower the peaks and ridges
of the Lebanon-Amanus chain bordering the coast-line from Palestine
to Turkey, the Taurus Mountains of southern Turkey, and the Zagros
range of the Iraq-Iran borderland. This rugged mountain frame for our
hilly-flanks zone rises to some magnificent alpine scenery, with peaks
of from ten to fifteen thousand feet in elevation. There are several
gaps in the Mediterranean coastal portion of the frame, through which
the winter's rain-bearing winds from the sea may break so as to carry
rain to the foothills of the Taurus and the Zagros.
The picture I hope you will have from this description is that of an
intermediate hilly-flanks zone lying between two regions of extremes.
The lower Tigris-Euphrates basin land is low and far too dry and hot
for agriculture based on rainfall alone; to the south and southwest, it
merges directly into the great desert of Arabia. The mountains which
lie above the hilly-flanks zone are much too high and rugged to have
encouraged farmers.
THE NATURAL ENVIRONMENT OF THE NUCLEAR NEAR EAST
The more we learn of this hilly-flanks zone that I describe, the
more it seems surely to have been a nuclear area. This is where we
archeologists need, and are beginning to get, the help of natural
scientists. They are coming to the conclusion that the natural
environment of the hilly-flanks zone today is much as it was some eight
to ten thousand years ago. There are still two kinds of wild wheat and
a wild barley, and the wild sheep, goat, and pig. We have discovered
traces of each of these at about nine thousand years ago, also traces
of wild ox, horse, and dog, each of which appears to be the probable
ancestor of the domesticated form. In fact, at about nine thousand
years ago, the two wheats, the barley, and at least the goat, were
already well on the road to domestication.
The wild wheats give us an interesting clue. They are only available
together with the wild barley within the hilly-flanks zone. While the
wild barley grows in a variety of elevations and beyond the zone,
at least one of the wild wheats does not seem to grow below the hill
country. As things look at the moment, the domestication of both the
wheats together could _only_ have taken place within the hilly-flanks
zone. Barley seems to have first come into cultivation due to its
presence as a weed in already cultivated wheat fields. There is also
a suggestion--there is still much more to learn in the matter--that
the animals which were first domesticated were most at home up in the
hilly-flanks zone in their wild state.
With a single exception--that of the dog--the earliest positive
evidence of domestication includes the two forms of wheat, the barley,
and the goat. The evidence comes from within the hilly-flanks zone.
However, it comes from a settled village proper, Jarmo (which I'll
describe in the next chapter), and is thus from the era of the primary
village-farming community. We are still without positive evidence of
domesticated grain and animals in the first era of the food-producing
stage, that of incipient cultivation and animal domestication.
THE ERA OF INCIPIENT CULTIVATION AND ANIMAL DOMESTICATION
I said above (p. 105) that my era of incipient cultivation and animal
domestication is mainly set up by playing a hunch. Although we cannot
really demonstrate it--and certainly not in the Near East--it would
be very strange for food-collectors not to have known a great deal
about the plants and animals most useful to them. They do seem to have
domesticated the dog. We can easily imagine them remembering to go
back, season after season, to a particular patch of ground where seeds
or acorns or berries grew particularly well. Most human beings, unless
they are extremely hungry, are attracted to baby animals, and many wild
pups or fawns or piglets must have been brought back alive by hunting
parties.
In this last sense, man has probably always been an incipient
cultivator and domesticator. But I believe that Adams is right in
suggesting that this would be doubly true with the experimenters of
the terminal era of food-collecting. We noticed that they also seem
to have had a tendency to settle down. Now my hunch goes that _when_
this experimentation and settling down took place within a potential
nuclear area--where a whole constellation of plants and animals
possible of domestication was available--the change was easily made.
Professor Charles A. Reed, our field colleague in zoology, agrees that
year-round settlement with plant domestication probably came before
there were important animal domestications.
INCIPIENT ERAS AND NUCLEAR AREAS
I have put this scheme into a simple chart (p. 111) with the names
of a few of the sites we are going to talk about. You will see that my
hunch means that there are eras of incipient cultivation _only_ within
nuclear areas. In a nuclear area, the terminal era of food-collecting
would probably have been quite short. I do not know for how long a time
the era of incipient cultivation and domestication would have lasted,
but perhaps for several thousand years. Then it passed on into the era
of the primary village-farming community.
Outside a nuclear area, the terminal era of food-collecting would last
for a long time; in a few out-of-the-way parts of the world, it still
hangs on. It would end in any particular place through contact with
and the spread of ideas of people who had passed on into one of the
more developed eras. In many cases, the terminal era of food-collecting
was ended by the incoming of the food-producing peoples themselves.
For example, the practices of food-production were carried into Europe
by the actual movement of some numbers of peoples (we don't know how
many) who had reached at least the level of the primary village-farming
community. The Forest folk learned food-production from them. There
was never an era of incipient cultivation and domestication proper in
Europe, if my hunch is right.
ARCHEOLOGICAL DIFFICULTIES IN SEEING THE INCIPIENT ERA
The way I see it, two things were required in order that an era of
incipient cultivation and domestication could begin. First, there had
to be the natural environment of a nuclear area, with its whole group
of plants and animals capable of domestication. This is the aspect of
the matter which we've said is directly given by nature. But it is
quite possible that such an environment with such a group of plants
and animals in it may have existed well before ten thousand years ago
in the Near East. It is also quite possible that the same promising
condition may have existed in regions which never developed into
nuclear areas proper. Here, again, we come back to the cultural factor.
I think it was that atmosphere of experimentation we've talked about
once or twice before. I can't define it for you, other than to say that
by the end of the Ice Age, the general level of many cultures was ready
for change. Ask me how and why this was so, and I'll tell you we don't
know yet, and that if we did understand this kind of question, there
would be no need for me to go on being a prehistorian!
[Illustration: POSSIBLE RELATIONSHIPS OF STAGES AND ERAS IN WESTERN
ASIA AND NORTHEASTERN AFRICA]
Now since this was an era of incipience, of the birth of new ideas,
and of experimentation, it is very difficult to see its traces
archeologically. New tools having to do with the new ways of getting
and, in fact, producing food would have taken some time to develop.
It need not surprise us too much if we cannot find hoes for planting
and sickles for reaping grain at the very beginning. We might expect
a time of making-do with some of the older tools, or with make-shift
tools, for some of the new jobs. The present-day wild cousin of the
domesticated sheep still lives in the mountains of western Asia. It has
no wool, only a fine down under hair like that of a deer, so it need
not surprise us to find neither the whorls used for spinning nor traces
of woolen cloth. It must have taken some time for a wool-bearing sheep
to develop and also time for the invention of the new tools which go
with weaving. It would have been the same with other kinds of tools for
the new way of life.
It is difficult even for an experienced comparative zoologist to tell
which are the bones of domesticated animals and which are those of
their wild cousins. This is especially so because the animal bones the
archeologists find are usually fragmentary. Furthermore, we do not have
a sort of library collection of the skeletons of the animals or an
herbarium of the plants of those times, against which the traces which
the archeologists find may be checked. We are only beginning to get
such collections for the modern wild forms of animals and plants from
some of our nuclear areas. In the nuclear area in the Near East, some
of the wild animals, at least, have already become extinct. There are
no longer wild cattle or wild horses in western Asia. We know they were
there from the finds we've made in caves of late Ice Age times, and
from some slightly later sites.
SITES WITH ANTIQUITIES OF THE INCIPIENT ERA
So far, we know only a very few sites which would suit my notion of the
incipient era of cultivation and animal domestication. I am closing
this chapter with descriptions of two of the best Near Eastern examples
I know of. You may not be satisfied that what I am able to describe
makes a full-bodied era of development at all. Remember, however, that
I've told you I'm largely playing a kind of a hunch, and also that the
archeological materials of this era will always be extremely difficult
to interpret. At the beginning of any new way of life, there will be a
great tendency for people to make-do, at first, with tools and habits
they are already used to. I would suspect that a great deal of this
making-do went on almost to the end of this era.
THE NATUFIAN, AN ASSEMBLAGE OF THE INCIPIENT ERA
The assemblage called the Natufian comes from the upper layers of a
number of caves in Palestine. Traces of its flint industry have also
turned up in Syria and Lebanon. We don't know just how old it is. I
guess that it probably falls within five hundred years either way of
about 5000 B.C.
Until recently, the people who produced the Natufian assemblage were
thought to have been only cave dwellers, but now at least three open
air Natufian sites have been briefly described. In their best-known
dwelling place, on Mount Carmel, the Natufian folk lived in the open
mouth of a large rock-shelter and on the terrace in front of it. On the
terrace, they had set at least two short curving lines of stones; but
these were hardly architecture; they seem more like benches or perhaps
the low walls of open pens. There were also one or two small clusters
of stones laid like paving, and a ring of stones around a hearth or
fireplace. One very round and regular basin-shaped depression had been
cut into the rocky floor of the terrace, and there were other less
regular basin-like depressions. In the newly reported open air sites,
there seem to have been huts with rounded corners.
Most of the finds in the Natufian layer of the Mount Carmel cave were
flints. About 80 per cent of these flint tools were microliths made
by the regular working of tiny blades into various tools, some having
geometric forms. The larger flint tools included backed blades, burins,
scrapers, a few arrow points, some larger hacking or picking tools, and
one special type. This last was the sickle blade.
We know a sickle blade of flint when we see one, because of a strange
polish or sheen which seems to develop on the cutting edge when the
blade has been used to cut grasses or grain, or--perhaps--reeds. In
the Natufian, we have even found the straight bone handles in which a
number of flint sickle blades were set in a line.
There was a small industry in ground or pecked stone (that is, abraded
not chipped) in the Natufian. This included some pestle and mortar
fragments. The mortars are said to have a deep and narrow hole,
and some of the pestles show traces of red ochre. We are not sure
that these mortars and pestles were also used for grinding food. In
addition, there were one or two bits of carving in stone.
NATUFIAN ANTIQUITIES IN OTHER MATERIALS; BURIALS AND PEOPLE
The Natufian industry in bone was quite rich. It included, beside the
sickle hafts mentioned above, points and harpoons, straight and curved
types of fish-hooks, awls, pins and needles, and a variety of beads and
pendants. There were also beads and pendants of pierced teeth and shell.
A number of Natufian burials have been found in the caves; some burials
were grouped together in one grave. The people who were buried within
the Mount Carmel cave were laid on their backs in an extended position,
while those on the terrace seem to have been flexed (placed in their
graves in a curled-up position). This may mean no more than that it was
easier to dig a long hole in cave dirt than in the hard-packed dirt of
the terrace. The people often had some kind of object buried with them,
and several of the best collections of beads come from the burials. On
two of the skulls there were traces of elaborate head-dresses of shell
beads.
[Illustration: SKETCH OF NATUFIAN ASSEMBLAGE
MICROLITHS
ARCHITECTURE?
BURIAL
CHIPPED STONE
GROUND STONE
BONE]
The animal bones of the Natufian layers show beasts of a modern type,
but with some differences from those of present-day Palestine. The
bones of the gazelle far outnumber those of the deer; since gazelles
like a much drier climate than deer, Palestine must then have had much
the same climate that it has today. Some of the animal bones were those
of large or dangerous beasts: the hyena, the bear, the wild boar,
and the leopard. But the Natufian people may have had the help of a
large domesticated dog. If our guess at a date for the Natufian is
right (about 7750 B.C.), this is an earlier dog than was that in the
Maglemosian of northern Europe. More recently, it has been reported
that a domesticated goat is also part of the Natufian finds.
The study of the human bones from the Natufian burials is not yet
complete. Until Professor McCown's study becomes available, we may note
Professor Coon's assessment that these people were of a basically
Mediterranean type.
THE KARIM SHAHIR ASSEMBLAGE
Karim Shahir differs from the Natufian sites in that it shows traces
of a temporary open site or encampment. It lies on the top of a bluff
in the Kurdish hill-country of northeastern Iraq. It was dug by Dr.
Bruce Howe of the expedition I directed in 1950-51 for the Oriental
Institute and the American Schools of Oriental Research. In 1954-55,
our expedition located another site, Mlefaat, with general resemblance
to Karim Shahir, but about a hundred miles north of it. In 1956, Dr.
Ralph Solecki located still another Karim Shahir type of site called
Zawi Chemi Shanidar. The Zawi Chemi site has a radiocarbon date of
8900 ± 300 B.C.
Karim Shahir has evidence of only one very shallow level of occupation.
It was probably not lived on very long, although the people who lived
on it spread out over about three acres of area. In spots, the single
layer yielded great numbers of fist-sized cracked pieces of limestone,
which had been carried up from the bed of a stream at the bottom of the
bluff. We think these cracked stones had something to do with a kind of
architecture, but we were unable to find positive traces of hut plans.
At Mlefaat and Zawi Chemi, there were traces of rounded hut plans.
As in the Natufian, the great bulk of small objects of the Karim Shahir
assemblage was in chipped flint. A large proportion of the flint tools
were microlithic bladelets and geometric forms. The flint sickle blade
was almost non-existent, being far scarcer than in the Natufian. The
people of Karim Shahir did a modest amount of work in the grinding of
stone; there were milling stone fragments of both the mortar and the
quern type, and stone hoes or axes with polished bits. Beads, pendants,
rings, and bracelets were made of finer quality stone. We found a few
simple points and needles of bone, and even two rather formless unbaked
clay figurines which seemed to be of animal form.
[Illustration: SKETCH OF KARIM SHAHIR ASSEMBLAGE
CHIPPED STONE
GROUND STONE
UNBAKED CLAY
SHELL
BONE
ARCHITECTURE]
Karim Shahir did not yield direct evidence of the kind of vegetable
food its people ate. The animal bones showed a considerable
increase in the proportion of the bones of the species capable of
domestication--sheep, goat, cattle, horse, dog--as compared with animal
bones from the earlier cave sites of the area, which have a high
proportion of bones of wild forms like deer and gazelle. But we do not
know that any of the Karim Shahir animals were actually domesticated.
Some of them may have been, in an incipient way, but we have no means
at the moment that will tell us from the bones alone.
WERE THE NATUFIAN AND KARIM SHAHIR PEOPLES FOOD-PRODUCERS?
It is clear that a great part of the food of the Natufian people
must have been hunted or collected. Shells of land, fresh-water, and
sea animals occur in their cave layers. The same is true as regards
Karim Shahir, save for sea shells. But on the other hand, we have
the sickles, the milling stones, the possible Natufian dog, and the
goat, and the general animal situation at Karim Shahir to hint at an
incipient approach to food-production. At Karim Shahir, there was the
tendency to settle down out in the open; this is echoed by the new
reports of open air Natufian sites. The large number of cracked stones
certainly indicates that it was worth the people's while to have some
kind of structure, even if the site as a whole was short-lived.
It is a part of my hunch that these things all point toward
food-production--that the hints we seek are there. But in the sense
that the peoples of the era of the primary village-farming community,
which we shall look at next, are fully food-producing, the Natufian
and Karim Shahir folk had not yet arrived. I think they were part of
a general build-up to full scale food-production. They were possibly
controlling a few animals of several kinds and perhaps one or two
plants, without realizing the full possibilities of this control as a
new way of life.
This is why I think of the Karim Shahir and Natufian folk as being at
a level, or in an era, of incipient cultivation and domestication. But
we shall have to do a great deal more excavation in this range of time
before we'll get the kind of positive information we need.
SUMMARY
I am sorry that this chapter has had to be so much more about ideas
than about the archeological traces of prehistoric men themselves.
But the antiquities of the incipient era of cultivation and animal
domestication will not be spectacular, even when we do have them
excavated in quantity. Few museums will be interested in these
antiquities for exhibition purposes. The charred bits or impressions
of plants, the fragments of animal bone and shell, and the varied
clues to climate and environment will be as important as the artifacts
themselves. It will be the ideas to which these traces lead us that
will be important. I am sure that this unspectacular material--when we
have much more of it, and learn how to understand what it says--will
lead us to "how" and "why" answers about the first great change in human
history.
We know the earliest village-farming communities appeared in western
Asia, in a nuclear area. We do not yet know why the Near Eastern
experiment came first, or why it didn't happen earlier in some other
nuclear area. Apparently, the level of culture and the promise of the
natural environment were ready first in western Asia. The next sites
we look at will show a simple but effective food-production already
in existence. Without effective food-production and the settled
village-farming communities, civilization never could have followed.
How effective food-production came into being by the end of the
incipient era, is, I believe, one of the most fascinating questions any
archeologist could face.
It now seems probable--from possibly two of the Palestinian sites with
varieties of the Natufian (Jericho and Nahal Oren)--that there were
one or more local Palestinian developments out of the Natufian into
later times. In the same way, what followed after the Karim Shahir type
of assemblage in northeastern Iraq was in some ways a reflection of
beginnings made at Karim Shahir and Zawi Chemi.
THE First Revolution
[Illustration]
As the incipient era of cultivation and animal domestication passed
onward into the era of the primary village-farming community, the first
basic change in human economy was fully achieved. In southwestern Asia,
this seems to have taken place about nine thousand years ago. I am
going to restrict my description to this earliest Near Eastern case--I
do not know enough about the later comparable experiments in the Far
East and in the New World. Let us first, once again, think of the
contrast between food-collecting and food-producing as ways of life.
THE DIFFERENCE BETWEEN FOOD-COLLECTORS AND FOOD-PRODUCERS
Childe used the word "revolution" because of the radical change that
took place in the habits and customs of man. Food-collectors--that is,
hunters, fishers, berry- and nut-gatherers--had to live in small groups
or bands, for they had to be ready to move wherever their food supply
moved. Not many people can be fed in this way in one area, and small
children and old folks are a burden. There is not enough food to store,
and it is not the kind that can be stored for long.
Do you see how this all fits into a picture? Small groups of people
living now in this cave, now in that--or out in the open--as they moved
after the animals they hunted; no permanent villages, a few half-buried
huts at best; no breakable utensils; no pottery; no signs of anything
for clothing beyond the tools that were probably used to dress the
skins of animals; no time to think of much of anything but food and
protection and disposal of the dead when death did come: an existence
which takes nature as it finds it, which does little or nothing to
modify nature--all in all, a savage's existence, and a very tough one.
A man who spends his whole life following animals just to kill them to
eat, or moving from one berry patch to another, is really living just
like an animal himself.
THE FOOD-PRODUCING ECONOMY
Against this picture let me try to draw another--that of man's life
after food-production had begun. His meat was stored on the hoof,
his grain in silos or great pottery jars. He lived in a house: it was
worth his while to build one, because he couldn't move far from his
fields and flocks. In his neighborhood enough food could be grown
and enough animals bred so that many people were kept busy. They all
lived close to their flocks and fields, in a village. The village was
already of a fair size, and it was growing, too. Everybody had more to
eat; they were presumably all stronger, and there were more children.
Children and old men could shepherd the animals by day or help with
the lighter work in the fields. After the crops had been harvested the
younger men might go hunting and some of them would fish, but the food
they brought in was only an addition to the food in the village; the
villagers wouldn't starve, even if the hunters and fishermen came home
empty-handed.
There was more time to do different things, too. They began to modify
nature. They made pottery out of raw clay, and textiles out of hair
or fiber. People who became good at pottery-making traded their pots
for food and spent all of their time on pottery alone. Other people
were learning to weave cloth or to make new tools. There were already
people in the village who were becoming full-time craftsmen.
Other things were changing, too. The villagers must have had
to agree on new rules for living together. The head man of the
village had problems different from those of the chief of the small
food-collectors' band. If somebody's flock of sheep spoiled a wheat
field, the owner wanted payment for the grain he lost. The chief of
the hunters was never bothered with such questions. Even the gods
had changed. The spirits and the magic that had been used by hunters
weren't of any use to the villagers. They needed gods who would watch
over the fields and the flocks, and they eventually began to erect
buildings where their gods might dwell, and where the men who knew most
about the gods might live.
WAS FOOD-PRODUCTION A REVOLUTION?
If you can see the difference between these two pictures--between
life in the food-collecting stage and life after food-production
had begun--you'll see why Professor Childe speaks of a "revolution."
By "revolution," he doesn't mean that it happened over night or that
it happened only once. We don't know exactly how long it took. Some
people think that all these changes may have occurred in less than
500 years, but I doubt that. The incipient era was probably an affair
of some duration. Once the level of the village-farming community had
been established, however, things did begin to move very fast. By
six thousand years ago, the descendants of the first villagers had
developed irrigation and plow agriculture in the relatively rainless
Mesopotamian alluvium and were living in towns with temples. Relative
to the half million years of food-gathering which lay behind, this had
been achieved with truly revolutionary suddenness.
GAPS IN OUR KNOWLEDGE OF THE NEAR EAST
If you'll look again at the chart (p. 111) you'll see that I have
very few sites and assemblages to name in the incipient era of
cultivation and domestication, and not many in the earlier part of
the primary village-farming level either. Thanks in no small part
to the intelligent co-operation given foreign excavators by the
Iraq Directorate General of Antiquities, our understanding of the
sequence in Iraq is growing more complete. I shall use Iraq as my main
yard-stick here. But I am far from being able to show you a series of
Sears Roebuck catalogues, even century by century, for any part of
the nuclear area. There is still a great deal of earth to move, and a
great mass of material to recover and interpret before we even begin to
understand "how" and "why."
Perhaps here, because this kind of archeology is really my specialty,
you'll excuse it if I become personal for a moment. I very much look
forward to having further part in closing some of the gaps in knowledge
of the Near East. This is not, as I've told you, the spectacular
range of Near Eastern archeology. There are no royal tombs, no gold,
no great buildings or sculpture, no writing, in fact nothing to
excite the normal museum at all. Nevertheless it is a range which,
idea-wise, gives the archeologist tremendous satisfaction. The country
of the hilly flanks is an exciting combination of green grasslands
and mountainous ridges. The Kurds, who inhabit the part of the area
in which I've worked most recently, are an extremely interesting and
hospitable people. Archeologists don't become rich, but I'll forego
the Cadillac for any bright spring morning in the Kurdish hills, on a
good site with a happy crew of workmen and an interested and efficient
staff. It is probably impossible to convey the full feeling which life
on such a dig holds--halcyon days for the body and acute pleasurable
stimulation for the mind. Old things coming newly out of the good dirt,
and the pieces of the human puzzle fitting into place! I think I am
an honest man; I cannot tell you that I am sorry the job is not yet
finished and that there are still gaps in this part of the Near Eastern
archeological sequence.
EARLIEST SITES OF THE VILLAGE FARMERS
So far, the Karim Shahir type of assemblage, which we looked at in the
last chapter, is the earliest material available in what I take to
be the nuclear area. We do not believe that Karim Shahir was a village
site proper: it looks more like the traces of a temporary encampment.
Two caves, called Belt and Hotu, which are outside the nuclear area
and down on the foreshore of the Caspian Sea, have been excavated
by Professor Coon. These probably belong in the later extension of
the terminal era of food-gathering; in their upper layers are traits
like the use of pottery borrowed from the more developed era of the
same time in the nuclear area. The same general explanation doubtless
holds true for certain materials in Egypt, along the upper Nile and in
the Kharga oasis: these materials, called Sebilian III, the Khartoum
neolithic, and the Khargan microlithic, are from surface sites,
not from caves. The chart (p. 111) shows where I would place these
materials in era and time.
[Illustration: THE HILLY FLANKS OF THE CRESCENT AND EARLY SITES OF THE
NEAR EAST]
Both Mlefaat and Dr. Solecki's Zawi Chemi Shanidar site appear to have
been slightly more settled in than was Karim Shahir itself. But I do
not think they belong to the era of farming-villages proper. The first
site of this era, in the hills of Iraqi Kurdistan, is Jarmo, on which
we have spent three seasons of work. Following Jarmo comes a variety of
sites and assemblages which lie along the hilly flanks of the crescent
and just below it. I am going to describe and illustrate some of these
for you.
Since not very much archeological excavation has yet been done on sites
of this range of time, I shall have to mention the names of certain
single sites which now alone stand for an assemblage. This does not
mean that I think the individual sites I mention were unique. In the
times when their various cultures flourished, there must have been
many little villages which shared the same general assemblage. We are
only now beginning to locate them again. Thus, if I speak of Jarmo,
or Jericho, or Sialk as single examples of their particular kinds of
assemblages, I don't mean that they were unique at all. I think I could
take you to the sites of at least three more Jarmos, within twenty
miles of the original one. They are there, but they simply haven't yet
been excavated. In 1956, a Danish expedition discovered material of
Jarmo type at Shimshara, only two dozen miles northeast of Jarmo, and
below an assemblage of Hassunan type (which I shall describe presently).
THE GAP BETWEEN KARIM SHAHIR AND JARMO
As we see the matter now, there is probably still a gap in the
available archeological record between the Karim Shahir-Mlefaat-Zawi
Chemi group (of the incipient era) and that of Jarmo (of the
village-farming era). Although some items of the Jarmo type materials
do reflect the beginnings of traditions set in the Karim Shahir group
(see p. 120), there is not a clear continuity. Moreover--to the
degree that we may trust a few radiocarbon dates--there would appear
to be around two thousand years of difference in time. The single
available Zawi Chemi date is 8900 ± 300 B.C.; the most reasonable
group of dates from Jarmo average to about 6750 ± 200 B.C. I am
uncertain about this two thousand years--I do not think it can have
been so long.
This suggests that we still have much work to do in Iraq. You can
imagine how earnestly we await the return of political stability in the
Republic of Iraq.
JARMO, IN THE KURDISH HILLS, IRAQ
The site of Jarmo has a depth of deposit of about twenty-seven feet,
and approximately a dozen layers of architectural renovation and
change. Nevertheless it is a one period site: its assemblage remains
essentially the same throughout, although one or two new items are
added in later levels. It covers about four acres of the top of a
bluff, below which runs a small stream. Jarmo lies in the hill country
east of the modern oil town of Kirkuk. The Iraq Directorate General of
Antiquities suggested that we look at it in 1948, and we have had three
seasons of digging on it since.
The people of Jarmo grew the barley plant and two different kinds of
wheat. They made flint sickles with which to reap their grain, mortars
or querns on which to crack it, ovens in which it might be parched, and
stone bowls out of which they might eat their porridge. We are sure
that they had the domesticated goat, but Professor Reed (the staff
zoologist) is not convinced that the bones of the other potentially
domesticable animals of Jarmo--sheep, cattle, pig, horse, dog--show
sure signs of domestication. We had first thought that all of these
animals were domesticated ones, but Reed feels he must find out much
more before he can be sure. As well as their grain and the meat from
their animals, the people of Jarmo consumed great quantities of land
snails. Botanically, the Jarmo wheat stands about half way between
fully bred wheat and the wild forms.
ARCHITECTURE: HALL-MARK OF THE VILLAGE
The sure sign of the village proper is in its traces of architectural
permanence. The houses of Jarmo were only the size of a small cottage
by our standards, but each was provided with several rectangular rooms.
The walls of the houses were made of puddled mud, often set on crude
foundations of stone. (The puddled mud wall, which the Arabs call
_touf_, is built by laying a three to six inch course of soft mud,
letting this sun-dry for a day or two, then adding the next course,
etc.) The village probably looked much like the simple Kurdish farming
village of today, with its mud-walled houses and low mud-on-brush
roofs. I doubt that the Jarmo village had more than twenty houses at
any one moment of its existence. Today, an average of about seven
people live in a comparable Kurdish house; probably the population of
Jarmo was about 150 people.
[Illustration: SKETCH OF JARMO ASSEMBLAGE
CHIPPED STONE
UNBAKED CLAY
GROUND STONE
POTTERY _UPPER THIRD OF SITE ONLY._
REED MATTING
BONE
ARCHITECTURE]
It is interesting that portable pottery does not appear until the
last third of the life of the Jarmo village. Throughout the duration
of the village, however, its people had experimented with the plastic
qualities of clay. They modeled little figurines of animals and of
human beings in clay; one type of human figurine they favored was that
of a markedly pregnant woman, probably the expression of some sort of
fertility spirit. They provided their house floors with baked-in-place
depressions, either as basins or hearths, and later with domed ovens of
clay. As we've noted, the houses themselves were of clay or mud; one
could almost say they were built up like a house-sized pot. Then,
finally, the idea of making portable pottery itself appeared, although
I very much doubt that the people of the Jarmo village discovered the
art.
On the other hand, the old tradition of making flint blades and
microlithic tools was still very strong at Jarmo. The sickle-blade was
made in quantities, but so also were many of the much older tool types.
Strangely enough, it is within this age-old category of chipped stone
tools that we see one of the clearest pointers to a newer age. Many of
the Jarmo chipped stone tools--microliths--were made of obsidian, a
black volcanic natural glass. The obsidian beds nearest to Jarmo are
over three hundred miles to the north. Already a bulk carrying trade
had been established--the forerunner of commerce--and the routes were
set by which, in later times, the metal trade was to move.
There are now twelve radioactive carbon dates from Jarmo. The most
reasonable cluster of determinations averages to about 6750 ± 200
B.C., although there is a completely unreasonable range of dates
running from 3250 to 9250 B.C.! _If_ I am right in what I take to be
reasonable, the first flush of the food-producing revolution had been
achieved almost nine thousand years ago.
HASSUNA, IN UPPER MESOPOTAMIAN IRAQ
We are not sure just how soon after Jarmo the next assemblage of Iraqi
material is to be placed. I do not think the time was long, and there
are a few hints that detailed habits in the making of pottery and
ground stone tools were actually continued from Jarmo times into the
time of the next full assemblage. This is called after a site named
Hassuna, a few miles to the south and west of modern Mosul. We also
have Hassunan type materials from several other sites in the same
general region. It is probably too soon to make generalizations about
it, but the Hassunan sites seem to cluster at slightly lower elevations
than those we have been talking about so far.
The catalogue of the Hassuna assemblage is of course more full and
elaborate than that of Jarmo. The Iraqi government's archeologists
who dug Hassuna itself exposed evidence of increasing architectural
know-how. The walls of houses were still formed of puddled mud;
sun-dried bricks appear only in later periods. There were now several
different ways of making and decorating pottery vessels. One style of
pottery painting, called the Samarran style, is an extremely handsome
one and must have required a great deal of concentration and excellence
of draftsmanship. On the other hand, the old habits for the preparation
of good chipped stone tools--still apparent at Jarmo--seem to have
largely disappeared by Hassunan times. The flint work of the Hassunan
catalogue is, by and large, a wretched affair. We might guess that the
kinaesthetic concentration of the Hassuna craftsmen now went into other
categories; that is, they suddenly discovered they might have more fun
working with the newer materials. It's a shame, for example, that none
of their weaving is preserved for us.
The two available radiocarbon determinations from Hassunan contexts
stand at about 5100 and 5600 B.C. ± 250 years.
OTHER EARLY VILLAGE SITES IN THE NUCLEAR AREA
I'll now name and very briefly describe a few of the other early
village assemblages either in or adjacent to the hilly flanks of the
crescent. Unfortunately, we do not have radioactive carbon dates for
many of these materials. We may guess that some particular assemblage,
roughly comparable to that of Hassuna, for example, must reflect a
culture which lived at just about the same time as that of Hassuna. We
do this guessing on the basis of the general similarity and degree of
complexity of the Sears Roebuck catalogues of the particular assemblage
and that of Hassuna. We suppose that for sites near at hand and of a
comparable cultural level, as indicated by their generally similar
assemblages, the dating must be about the same. We may also know that
in a general stratigraphic sense, the sites in question may both appear
at the bottom of the ascending village sequence in their respective
areas. Without a number of consistent radioactive carbon dates, we
cannot be precise about priorities.
[Illustration: SKETCH OF HASSUNA ASSEMBLAGE
POTTERY
POTTERY OBJECTS
CHIPPED STONE
BONE
GROUND STONE
ARCHITECTURE
REED MATTING
BURIAL]
The ancient mound at Jericho, in the Dead Sea valley in Palestine,
yields some very interesting material. Its catalogue somewhat resembles
that of Jarmo, especially in the sense that there is a fair depth
of deposit without portable pottery vessels. On the other hand, the
architecture of Jericho is surprisingly complex, with traces of massive
stone fortification walls and the general use of formed sun-dried
mud brick. Jericho lies in a somewhat strange and tropically lush
ecological niche, some seven hundred feet below sea level; it is
geographically within the hilly-flanks zone but environmentally not
part of it.
Several radiocarbon dates for Jericho fall within the range of those
I find reasonable for Jarmo, and their internal statistical consistency
is far better than that for the Jarmo determinations. It is not yet
clear exactly what this means.
The mound at Jericho (Tell es-Sultan) contains a remarkably
fine sequence, which perhaps does not have the gap we noted in
Iraqi-Kurdistan between the Karim Shahir group and Jarmo. While I am
not sure that the Jericho sequence will prove valid for those parts
of Palestine outside the special Dead Sea environmental niche, the
sequence does appear to proceed from the local variety of Natufian into
that of a very well settled community. So far, we have little direct
evidence for the food-production basis upon which the Jericho people
subsisted.
There is an early village assemblage with strong characteristics of its
own in the land bordering the northeast corner of the Mediterranean
Sea, where Syria and the Cilician province of Turkey join. This early
Syro-Cilician assemblage must represent a general cultural pattern
which was at least in part contemporary with that of the Hassuna
assemblage. These materials from the bases of the mounds at Mersin, and
from Judaidah in the Amouq plain, as well as from a few other sites,
represent the remains of true villages. The walls of their houses were
built of puddled mud, but some of the house foundations were of stone.
Several different kinds of pottery were made by the people of these
villages. None of it resembles the pottery from Hassuna or from the
upper levels of Jarmo or Jericho. The Syro-Cilician people had not
lost their touch at working flint. An important southern variation of
the Syro-Cilician assemblage has been cleared recently at Byblos, a
port town famous in later Phoenician times. There are three radiocarbon
determinations which suggest that the time range for these developments
was in the sixth or early fifth millennium B.C.
It would be fascinating to search for traces of even earlier
village-farming communities and for the remains of the incipient
cultivation era, in the Syro-Cilician region.
THE IRANIAN PLATEAU AND THE NILE VALLEY
The map on page 125 shows some sites which lie either outside or in
an extension of the hilly-flanks zone proper. From the base of the
great mound at Sialk on the Iranian plateau came an assemblage of
early village material, generally similar, in the kinds of things it
contained, to the catalogues of Hassuna and Judaidah. The details of
how things were made are different; the Sialk assemblage represents
still another cultural pattern. I suspect it appeared a bit later
in time than did that of Hassuna. There is an important new item in
the Sialk catalogue. The Sialk people made small drills or pins of
hammered copper. Thus the metallurgist's specialized craft had made its
appearance.
There is at least one very early Iranian site on the inward slopes
of the hilly-flanks zone. It is the earlier of two mounds at a place
called Bakun, in southwestern Iran; the results of the excavations
there are not yet published and we only know of its coarse and
primitive pottery. I only mention Bakun because it helps us to plot the
extent of the hilly-flanks zone villages on the map.
The Nile Valley lies beyond the peculiar environmental zone of the
hilly flanks of the crescent, and it is probable that the earliest
village-farming communities in Egypt were established by a few people
who wandered into the Nile delta area from the nuclear area. The
assemblage which is most closely comparable to the catalogue of Hassuna
or Judaidah, for example, is that from little settlements along the
shore of the Fayum lake. The Fayum materials come mainly from grain
bins or silos. Another site, Merimde, in the western part of the Nile
delta, shows the remains of a true village, but it may be slightly
later than the settlement of the Fayum. There are radioactive carbon
dates for the Fayum materials at about 4275 B.C. ± 320 years, which
is almost fifteen hundred years later than the determinations suggested
for the Hassunan or Syro-Cilician assemblages. I suspect that this
is a somewhat over-extended indication of the time it took for the
generalized cultural pattern of village-farming community life to
spread from the nuclear area down into Egypt, but as yet we have no way
of testing these matters.
In this same vein, we have two radioactive carbon dates for an
assemblage from sites near Khartoum in the Sudan, best represented by
the mound called Shaheinab. The Shaheinab catalogue roughly corresponds
to that of the Fayum; the distance between the two places, as the Nile
flows, is roughly 1,500 miles. Thus it took almost a thousand years for
the new way of life to be carried as far south into Africa as Khartoum;
the two Shaheinab dates average about 3300 B.C. ± 400 years.
If the movement was up the Nile (southward), as these dates suggest,
then I suspect that the earliest available village material of middle
Egypt, the so-called Tasian, is also later than that of the Fayum. The
Tasian materials come from a few graves near a village called Deir
Tasa, and I have an uncomfortable feeling that the Tasian assemblage
may be mainly an artificial selection of poor examples of objects which
belong in the following range of time.
SPREAD IN TIME AND SPACE
There are now two things we can do; in fact, we have already begun to
do them. We can watch the spread of the new way of life upward through
time in the nuclear area. We can also see how the new way of life
spread outward in space from the nuclear area, as time went on. There
is good archeological evidence that both these processes took place.
For the hill country of northeastern Iraq, in the nuclear area, we
have already noticed how the succession (still with gaps) from Karim
Shahir, through Mlefaat and Jarmo, to Hassuna can be charted (see
chart, p. 111). In the next chapter, we shall continue this charting
and description of what happened in Iraq upward through time. We also
watched traces of the new way of life move through space up the Nile
into Africa, to reach Khartoum in the Sudan some thirty-five hundred
years later than we had seen it at Jarmo or Jericho. We caught glimpses
of it in the Fayum and perhaps at Tasa along the way.
For the remainder of this chapter, I shall try to suggest briefly for
you the directions taken by the spread of the new way of life from the
nuclear area in the Near East. First, let me make clear again that
I _do not_ believe that the village-farming community way of life
was invented only once and in the Near East. It seems to me that the
evidence is very clear that a separate experiment arose in the New
World. For China, the question of independence or borrowing--in the
appearance of the village-farming community there--is still an open
one. In the last chapter, we noted the probability of an independent
nuclear area in southeastern Asia. Professor Carl Sauer strongly
champions the great importance of this area as _the_ original center
of agricultural pursuits, as a kind of cradle of all incipient eras
of the Old World at least. While there is certainly not the slightest
archeological evidence to allow us to go that far, we may easily expect
that an early southeast Asian development would have been felt in
China. However, the appearance of the village-farming community in the
northwest of India, at least, seems to have depended on the earlier
development in the Near East. It is also probable that ideas of the new
way of life moved well beyond Khartoum in Africa.
THE SPREAD OF THE VILLAGE-FARMING COMMUNITY WAY OF LIFE INTO EUROPE
How about Europe? I won't give you many details. You can easily imagine
that the late prehistoric prelude to European history is a complicated
affair. We all know very well how complicated an area Europe is now,
with its welter of different languages and cultures. Remember, however,
that a great deal of archeology has been done on the late prehistory of
Europe, and very little on that of further Asia and Africa. If we knew
as much about these areas as we do of Europe, I expect we'd find them
just as complicated.
This much is clear for Europe, as far as the spread of the
village-community way of life is concerned. The general idea and much
of the know-how and the basic tools of food-production moved from the
Near East to Europe. So did the plants and animals which had been
domesticated; they were not naturally at home in Europe, as they were
in western Asia. I do not, of course, mean that there were traveling
salesmen who carried these ideas and things to Europe with a commercial
gleam in their eyes. The process took time, and the ideas and things
must have been passed on from one group of people to the next. There
was also some actual movement of peoples, but we don't know the size of
the groups that moved.
The story of the colonization of Europe by the first farmers is
thus one of (1) the movement from the eastern Mediterranean lands
of some people who were farmers; (2) the spread of ideas and things
beyond the Near East itself and beyond the paths along which the
colonists moved; and (3) the adaptations of the ideas and things
by the indigenous Forest folk, about whose receptiveness Professor
Mathiassen speaks (p. 97). It is important to note that the resulting
cultures in the new European environment were European, not Near
Eastern. The late Professor Childe remarked that the peoples of the
West "were not slavish imitators; they adapted the gifts from the East
... into a new and organic whole capable of developing on its own
original lines."
THE WAYS TO EUROPE
Suppose we want to follow the traces of those earliest village-farmers
who did travel from western Asia into Europe. Let us start from
Syro-Cilicia, that part of the hilly-flanks zone proper which lies in
the very northeastern corner of the Mediterranean. Three ways would be
open to us (of course we could not be worried about permission from the
Soviet authorities!). We would go north, or north and slightly east,
across Anatolian Turkey, and skirt along either shore of the Black Sea
or even to the east of the Caucasus Mountains along the Caspian Sea,
to reach the plains of Ukrainian Russia. From here, we could march
across eastern Europe to the Baltic and Scandinavia, or even hook back
southwestward to Atlantic Europe.
Our second way from Syro-Cilicia would also lie over Anatolia, to the
northwest, where we would have to swim or raft ourselves over the
Dardanelles or the Bosphorus to the European shore. Then we would bear
left toward Greece, but some of us might turn right again in Macedonia,
going up the valley of the Vardar River to its divide and on down
the valley of the Morava beyond, to reach the Danube near Belgrade
in Jugoslavia. Here we would turn left, following the great river
valley of the Danube up into central Europe. We would have a number of
tributary valleys to explore, or we could cross the divide and go down
the valley of the Rhine to the North Sea.
Our third way from Syro-Cilicia would be by sea. We would coast along
southern Anatolia and visit Cyprus, Crete, and the Aegean islands on
our way to Greece, where, in the north, we might meet some of those who
had taken the second route. From Greece, we would sail on to Italy and
the western isles, to reach southern France and the coasts of Spain.
Eventually a few of us would sail up the Atlantic coast of Europe, to
reach western Britain and even Ireland.
[Illustration: PROBABLE ROUTES AND TIMING IN THE SPREAD OF THE
VILLAGE-FARMING COMMUNITY WAY OF LIFE FROM THE NEAR EAST TO EUROPE]
Of course none of us could ever take these journeys as the first
farmers took them, since the whole course of each journey must have
lasted many lifetimes. The date given to the assemblage called Windmill
Hill, the earliest known trace of village-farming communities in
England, is about 2500 B.C. I would expect about 5500 B.C. to be a
safe date to give for the well-developed early village communities of
Syro-Cilicia. We suspect that the spread throughout Europe did not
proceed at an even rate. Professor Piggott writes that at a date
probably about 2600 B.C., "simple agricultural communities were being
established in Spain and southern France, and from the latter region a
spread northwards can be traced ... from points on the French seaboard
of the [English] Channel ... there were emigrations of a certain number
of these tribes by boat, across to the chalk lands of Wessex and Sussex
[in England], probably not more than three or four generations later
than the formation of the south French colonies."
New radiocarbon determinations are becoming available all the
time--already several suggest that the food-producing way of life
had reached the lower Rhine and Holland by 4000 B.C. But not all
prehistorians accept these dates, so I do not show them on my map
(p. 139).
THE EARLIEST FARMERS OF ENGLAND
To describe the later prehistory of all Europe for you would take
another book and a much larger one than this is. Therefore, I have
decided to give you only a few impressions of the later prehistory of
Britain. Of course the British Isles lie at the other end of Europe
from our base-line in western Asia. Also, they received influences
along at least two of the three ways in which the new way of life
moved into Europe. We will look at more of their late prehistory in a
following chapter: here, I shall speak only of the first farmers.
The assemblage called Windmill Hill, which appears in the south of
England, exhibits three different kinds of structures, evidence of
grain-growing and of stock-breeding, and some distinctive types of
pottery and stone implements. The most remarkable type of structure
is the earthwork enclosures which seem to have served as seasonal
cattle corrals. These enclosures were roughly circular, reached over
a thousand feet in diameter, and sometimes included two or three
concentric sets of banks and ditches. Traces of oblong timber houses
have been found, but not within the enclosures. The second type of
structure is mine-shafts, dug down into the chalk beds where good
flint for the making of axes or hoes could be found. The third type
of structure is long simple mounds or unchambered barrows, in one
end of which burials were made. It has been commonly believed that the
Windmill Hill assemblage belonged entirely to the cultural tradition
which moved up through France to the Channel. Professor Piggott is now
convinced, however, that important elements of Windmill Hill stem from
northern Germany and Denmark--products of the first way into Europe
from the east.
The archeological traces of a second early culture are to be found
in the west of England, western and northern Scotland, and most of
Ireland. The bearers of this culture had come up the Atlantic coast
by sea from southern France and Spain. The evidence they have left us
consists mainly of tombs and the contents of tombs, with only very
rare settlement sites. The tombs were of some size and received the
bodies of many people. The tombs themselves were built of stone, heaped
over with earth; the stones enclosed a passage to a central chamber
(passage graves), or to a simple long gallery, along the sides of
which the bodies were laid (gallery graves). The general type of
construction is called _megalithic_ (= "great stone"), and the whole
earth-mounded structure is often called a _barrow_. Since many have
proper chambers, in one sense or another, we used the term "unchambered
barrow" above to distinguish those of the Windmill Hill type from these
megalithic structures. There is some evidence for sacrifice, libations,
and ceremonial fires, and it is clear that some form of community
ritual was focused on the megalithic tombs.
The cultures of the people who produced the Windmill Hill assemblage
and of those who made the megalithic tombs flourished, at least in
part, at the same time. Although the distributions of the two different
types of archeological traces are in quite different parts of the
country, there is Windmill Hill pottery in some of the megalithic
tombs. But the tombs also contain pottery which seems to have arrived
with the tomb builders themselves.
The third early British group of antiquities of this general time
(following 2500 B.C.) comes from sites in southern and eastern England.
It is not so certain that the people who made this assemblage, called
Peterborough, were actually farmers. While they may on occasion have
practiced a simple agriculture, many items of their assemblage link
them closely with that of the Forest folk of earlier times in
England and in the Baltic countries. Their pottery is decorated with
impressions of cords and is quite different from that of Windmill Hill
and the megalithic builders. In addition, the distribution of their
finds extends into eastern Britain, where the other cultures have left
no trace. The Peterborough people had villages with semi-subterranean
huts, and the bones of oxen, pigs, and sheep have been found in a few
of these. On the whole, however, hunting and fishing seem to have been
their vital occupations. They also established trade routes especially
to acquire the raw material for stone axes.
A probably slightly later culture, whose traces are best known from
Skara Brae on Orkney, also had its roots in those cultures of the
Baltic area which fused out of the meeting of the Forest folk and
the peoples who took the eastern way into Europe. Skara Brae is very
well preserved, having been built of thin stone slabs about which
dune-sand drifted after the village died. The individual houses, the
bedsteads, the shelves, the chests for clothes and oddments--all built
of thin stone-slabs--may still be seen in place. But the Skara Brae
people lived entirely by sheep- and cattle-breeding, and by catching
shellfish. Neither grain nor the instruments of agriculture appeared at
Skara Brae.
THE EUROPEAN ACHIEVEMENT
The above is only a very brief description of what went on in Britain
with the arrival of the first farmers. There are many interesting
details which I have omitted in order to shorten the story.
I believe some of the difficulty we have in understanding the
establishment of the first farming communities in Europe is with
the word "colonization." We have a natural tendency to think of
colonization as it has happened within the last few centuries. In the
case of the colonization of the Americas, for example, the colonists
came relatively quickly, and in increasingly vast numbers. They had
vastly superior technical, political, and war-making skills, compared
with those of the Indians. There was not much mixing with the Indians.
The case in Europe five or six thousand years ago must have been very
different. I wonder if it is even proper to call people "colonists"
who move some miles to a new region, settle down and farm it for some
years, then move on again, generation after generation? The ideas and
the things which these new people carried were only _potentially_
superior. The ideas and things and the people had to prove themselves
in their adaptation to each new environment. Once this was done another
link to the chain would be added, and then the forest-dwellers and
other indigenous folk of Europe along the way might accept the new
ideas and things. It is quite reasonable to expect that there must have
been much mixture of the migrants and the indigenes along the way; the
Peterborough and Skara Brae assemblages we mentioned above would seem
to be clear traces of such fused cultures. Sometimes, especially if the
migrants were moving by boat, long distances may have been covered in
a short time. Remember, however, we seem to have about three thousand
years between the early Syro-Cilician villages and Windmill Hill.
Let me repeat Professor Childe again. The peoples of the West were
"not slavish imitators: they adapted the gifts from the East ... into
a new and organic whole capable of developing on its own original
lines." Childe is of course completely conscious of the fact that his
"peoples of the West" were in part the descendants of migrants who came
originally from the East, bringing their gifts with them. This
was the late prehistoric achievement of Europe--to take new ideas and
things and some migrant peoples and, by mixing them with the old in its
own environments, to forge a new and unique series of cultures.
What we know of the ways of men suggests to us that when the details
of the later prehistory of further Asia and Africa are learned, their
stories will be just as exciting.
THE CONQUEST OF CIVILIZATION
[Illustration]
Now we must return to the Near East again. We are coming to the point
where history is about to begin. I am going to stick pretty close
to Iraq and Egypt in this chapter. These countries will perhaps be
the most interesting to most of us, for the foundations of western
civilization were laid in the river lands of the Tigris and Euphrates
and of the Nile. I shall probably stick closest of all to Iraq, because
things first happened there and also because I know it best.
There is another interesting thing, too. We have seen that the first
experiment in village-farming took place in the Near East. So did
the first experiment in civilization. Both experiments took. The
traditions we live by today are based, ultimately, on those ancient
beginnings in food-production and civilization in the Near East.
WHAT CIVILIZATION MEANS
I shall not try to define civilization for you; rather, I shall
tell you what the word brings to my mind. To me civilization means
urbanization: the fact that there are cities. It means a formal
political set-up--that there are kings or governing bodies that the
people have set up. It means formal laws--rules of conduct--which the
government (if not the people) believes are necessary. It probably
means that there are formalized projects--roads, harbors, irrigation
canals, and the like--and also some sort of army or police force
to protect them. It means quite new and different art forms. It
also usually means there is writing. (The people of the Andes--the
Incas--had everything which goes to make up a civilization but formal
writing. I can see no reason to say they were not civilized.) Finally,
as the late Professor Redfield reminded us, civilization seems to bring
with it the dawn of a new kind of moral order.
In different civilizations, there may be important differences in the
way such things as the above are managed. In early civilizations, it is
usual to find religion very closely tied in with government, law, and
so forth. The king may also be a high priest, or he may even be thought
of as a god. The laws are usually thought to have been given to the
people by the gods. The temples are protected just as carefully as the
other projects.
CIVILIZATION IMPOSSIBLE WITHOUT FOOD-PRODUCTION
Civilizations have to be made up of many people. Some of the people
live in the country; some live in very large towns or cities. Classes
of society have begun. There are officials and government people; there
are priests or religious officials; there are merchants and traders;
there are craftsmen, metal-workers, potters, builders, and so on; there
are also farmers, and these are the people who produce the food for the
whole population. It must be obvious that civilization cannot exist
without food-production and that food-production must also be at a
pretty efficient level of village-farming before civilization can even
begin.
But people can be food-producing without being civilized. In many
parts of the world this is still the case. When the white men first
came to America, the Indians in most parts of this hemisphere were
food-producers. They grew corn, potatoes, tomatoes, squash, and many
other things the white men had never eaten before. But only the Aztecs
of Mexico, the Mayas of Yucatan and Guatemala, and the Incas of the
Andes were civilized.
WHY DIDN'T CIVILIZATION COME TO ALL FOOD-PRODUCERS?
Once you have food-production, even at the well-advanced level of
the village-farming community, what else has to happen before you
get civilization? Many men have asked this question and have failed
to give a full and satisfactory answer. There is probably no _one_
answer. I shall give you my own idea about how civilization _may_ have
come about in the Near East alone. Remember, it is only a guess--a
putting together of hunches from incomplete evidence. It is _not_ meant
to explain how civilization began in any of the other areas--China,
southeast Asia, the Americas--where other early experiments in
civilization went on. The details in those areas are quite different.
Whether certain general principles hold, for the appearance of any
early civilization, is still an open and very interesting question.
WHERE CIVILIZATION FIRST APPEARED IN THE NEAR EAST
You remember that our earliest village-farming communities lay along
the hilly flanks of a great crescent. (See map on p. 125.)
Professor Breasted's "fertile crescent" emphasized the rich river
valleys of the Nile and the Tigris-Euphrates Rivers. Our hilly-flanks
area of the crescent zone arches up from Egypt through Palestine and
Syria, along southern Turkey into northern Iraq, and down along the
southwestern fringe of Iran. The earliest food-producing villages we
know already existed in this area by about 6750 B.C. (± 200 years).
Now notice that this hilly-flanks zone does not include southern
Mesopotamia, the alluvial land of the lower Tigris and Euphrates in
Iraq, or the Nile Valley proper. The earliest known villages of classic
Mesopotamia and Egypt seem to appear fifteen hundred or more years
after those of the hilly-flanks zone. For example, the early Fayum
village which lies near a lake west of the Nile Valley proper (see p.
135) has a radiocarbon date of 4275 B.C. ± 320 years. It was in the
river lands, however, that the immediate beginnings of civilization
were made.
We know that by about 3200 B.C. the Early Dynastic period had begun
in southern Mesopotamia. The beginnings of writing go back several
hundred years earlier, but we can safely say that civilization had
begun in Mesopotamia by 3200 B.C. In Egypt, the beginning of the First
Dynasty is slightly later, at about 3100 B.C., and writing probably
did not appear much earlier. There is no question but that history and
civilization were well under way in both Mesopotamia and Egypt by 3000
B.C.--about five thousand years ago.
THE HILLY-FLANKS ZONE VERSUS THE RIVER LANDS
Why did these two civilizations spring up in these two river
lands which apparently were not even part of the area where the
village-farming community began? Why didn't we have the first
civilizations in Palestine, Syria, north Iraq, or Iran, where we're
sure food-production had had a long time to develop? I think the
probable answer gives a clue to the ways in which civilization began in
Egypt and Mesopotamia.
The land in the hilly flanks is of a sort which people can farm without
too much trouble. There is a fairly fertile coastal strip in Palestine
and Syria. There are pleasant mountain slopes, streams running out to
the sea, and rain, at least in the winter months. The rain belt and the
foothills of the Turkish mountains also extend to northern Iraq and on
to the Iranian plateau. The Iranian plateau has its mountain valleys,
streams, and some rain. These hilly flanks of the crescent, through
most of its arc, are almost made-to-order for beginning farmers. The
grassy slopes of the higher hills would be pasture for their herds
and flocks. As soon as the earliest experiments with agriculture and
domestic animals had been successful, a pleasant living could be
made--and without too much trouble.
I should add here again, that our evidence points increasingly to a
climate for those times which is very little different from that for
the area today. Now look at Egypt and southern Mesopotamia. Both are
lands without rain, for all intents and purposes. Both are lands with
rivers that have laid down very fertile soil--soil perhaps superior to
that in the hilly flanks. But in both lands, the rivers are of no great
aid without some control.
The Nile floods its banks once a year, in late September or early
October. It not only soaks the narrow fertile strip of land on either
side; it lays down a fresh layer of new soil each year. Beyond the
fertile strip on either side rise great cliffs, and behind them is the
desert. In its natural, uncontrolled state, the yearly flood of the
Nile must have caused short-lived swamps that were full of crocodiles.
After a short time, the flood level would have dropped, the water and
the crocodiles would have run back into the river, and the swamp plants
would have become parched and dry.
The Tigris and the Euphrates of Mesopotamia are less likely to flood
regularly than the Nile. The Tigris has a shorter and straighter course
than the Euphrates; it is also the more violent river. Its banks are
high, and when the snows melt and flow into all of its tributary rivers
it is swift and dangerous. The Euphrates has a much longer and more
curving course and few important tributaries. Its banks are lower and
it is less likely to flood dangerously. The land on either side and
between the two rivers is very fertile, south of the modern city of
Baghdad. Unlike the Nile Valley, neither the Tigris nor the Euphrates
is flanked by cliffs. The land on either side of the rivers stretches
out for miles and is not much rougher than a poor tennis court.
THE RIVERS MUST BE CONTROLLED
The real trick in both Egypt and Mesopotamia is to make the rivers work
for you. In Egypt, this is a matter of building dikes and reservoirs
that will catch and hold the Nile flood. In this way, the water is held
and allowed to run off over the fields as it is needed. In Mesopotamia,
it is a matter of taking advantage of natural river channels and branch
channels, and of leading ditches from these onto the fields.
Obviously, we can no longer find the first dikes or reservoirs of
the Nile Valley, or the first canals or ditches of Mesopotamia. The
same land has been lived on far too long for any traces of the first
attempts to be left; or, especially in Egypt, it has been covered by
the yearly deposits of silt, dropped by the river floods. But we're
pretty sure the first food-producers of Egypt and southern Mesopotamia
must have made such dikes, canals, and ditches. In the first place,
there can't have been enough rain for them to grow things otherwise.
In the second place, the patterns for such projects seem to have been
pretty well set by historic times.
CONTROL OF THE RIVERS THE BUSINESS OF EVERYONE
Here, then, is a _part_ of the reason why civilization grew in Egypt
and Mesopotamia first--not in Palestine, Syria, or Iran. In the latter
areas, people could manage to produce their food as individuals. It
wasn't too hard; there were rain and some streams, and good pasturage
for the animals even if a crop or two went wrong. In Egypt and
Mesopotamia, people had to put in a much greater amount of work, and
this work couldn't be individual work. Whole villages or groups of
people had to turn out to fix dikes or dig ditches. The dikes had to be
repaired and the ditches carefully cleared of silt each year, or they
would become useless.
There also had to be hard and fast rules. The person who lived nearest
the ditch or the reservoir must not be allowed to take all the water
and leave none for his neighbors. It was not only a business of
learning to control the rivers and of making their waters do the
farmers' work. It also meant controlling men. But once these men had
managed both kinds of controls, what a wonderful yield they had! The
soil was already fertile, and the silt which came in the floods and
ditches kept adding fertile soil.
THE GERM OF CIVILIZATION IN EGYPT AND MESOPOTAMIA
This learning to work together for the common good was the real germ of
the Egyptian and the Mesopotamian civilizations. The bare elements of
civilization were already there: the need for a governing hand and for
laws to see that the community's work was done and that the water was
justly shared. You may object that there is a sort of chicken and egg
paradox in this idea. How could the people set up the rules until they
had managed to get a way to live, and how could they manage to get a
way to live until they had set up the rules? I think that small groups
must have moved down along the mud-flats of the river banks quite
early, making use of naturally favorable spots, and that the rules grew
out of such cases. It would have been like the hand-in-hand growth of
automobiles and paved highways in the United States.
Once the rules and the know-how did get going, there must have been a
constant interplay of the two. Thus, the more the crops yielded, the
richer and better-fed the people would have been, and the more the
population would have grown. As the population grew, more land would
have needed to be flooded or irrigated, and more complex systems of
dikes, reservoirs, canals, and ditches would have been built. The more
complex the system, the more necessity for work on new projects and for
the control of their use.... And so on....
What I have just put down for you is a guess at the manner of growth of
some of the formalized systems that go to make up a civilized society.
My explanation has been pointed particularly at Egypt and Mesopotamia.
I have already told you that the irrigation and water-control part of
it does not apply to the development of the Aztecs or the Mayas, or
perhaps anybody else. But I think that a fair part of the story of
Egypt and Mesopotamia must be as I've just told you.
I am particularly anxious that you do _not_ understand me to mean that
irrigation _caused_ civilization. I am sure it was not that simple at
all. For, in fact, a complex and highly engineered irrigation system
proper did not come until later times. Let's say rather that the simple
beginnings of irrigation allowed and in fact encouraged a great number
of things in the technological, political, social, and moral realms of
culture. We do not yet understand what all these things were or how
they worked. But without these other aspects of culture, I do not
think that urbanization and civilization itself could have come into
being.
THE ARCHEOLOGICAL SEQUENCE TO CIVILIZATION IN IRAQ
We last spoke of the archeological materials of Iraq on page 130,
where I described the village-farming community of Hassunan type. The
Hassunan type villages appear in the hilly-flanks zone and in the
rolling land adjacent to the Tigris in northern Iraq. It is probable
that even before the Hassuna pattern of culture lived its course, a
new assemblage had been established in northern Iraq and Syria. This
assemblage is called Halaf, after a site high on a tributary of the
Euphrates, on the Syro-Turkish border.
[Illustration: SKETCH OF SELECTED ITEMS OF HALAFIAN ASSEMBLAGE
BEADS AND PENDANTS
POTTERY MOTIFS
POTTERY]
The Halafian assemblage is incompletely known. The culture it
represents included a remarkably handsome painted pottery.
Archeologists have tended to be so fascinated with this pottery that
they have bothered little with the rest of the Halafian assemblage. We
do know that strange stone-founded houses, with plans like those of the
popular notion of an Eskimo igloo, were built. Like the pottery of the
Samarran style, which appears as part of the Hassunan assemblage (see
p. 131), the Halafian painted pottery implies great concentration and
excellence of draftsmanship on the part of the people who painted it.
We must mention two very interesting sites adjacent to the mud-flats of
the rivers, half way down from northern Iraq to the classic alluvial
Mesopotamian area. One is Baghouz on the Euphrates; the other is
Samarra on the Tigris (see map, p. 125). Both these sites yield the
handsome painted pottery of the style called Samarran: in fact it
is Samarra which gives its name to the pottery. Neither Baghouz nor
Samarra has a completely Hassunan type of assemblage, and at Samarra
there are a few pots of proper Halafian style. I suppose that Samarra
and Baghouz give us glimpses of those early farmers who had begun to
finger their way down the mud-flats of the river banks toward the
fertile but yet untilled southland.
CLASSIC SOUTHERN MESOPOTAMIA FIRST OCCUPIED
Our next step is into the southland proper. Here, deep in the core of
the mound which later became the holy Sumerian city of Eridu, Iraqi
archeologists uncovered a handsome painted pottery. Pottery of the same
type had been noticed earlier by German archeologists on the surface
of a small mound, awash in the spring floods, near the remains of the
Biblical city of Erech (Sumerian = Uruk; Arabic = Warka). This Eridu
pottery, which is about all we have of the assemblage of the people who
once produced it, may be seen as a blend of the Samarran and Halafian
painted pottery styles. This may over-simplify the case, but as yet we
do not have much evidence to go on. The idea does at least fit with my
interpretation of the meaning of Baghouz and Samarra as way-points on
the mud-flats of the rivers half way down from the north.
My colleague, Robert Adams, believes that there were certainly
riverine-adapted food-collectors living in lower Mesopotamia. The
presence of such would explain why the Eridu assemblage is not simply
the sum of the Halafian and Samarran assemblages. But the domesticated
plants and animals and the basic ways of food-production must have
come from the hilly-flanks country in the north.
Above the basal Eridu levels, and at a number of other sites in the
south, comes a full-fledged assemblage called Ubaid. Incidentally,
there is an aspect of the Ubaidian assemblage in the north as well. It
seems to move into place before the Halaf manifestation is finished,
and to blend with it. The Ubaidian assemblage in the south is by far
the more spectacular. The development of the temple has been traced
at Eridu from a simple little structure to a monumental building some
62 feet long, with a pilaster-decorated façade and an altar in its
central chamber. There is painted Ubaidian pottery, but the style is
hurried and somewhat careless and gives the _impression_ of having been
a cheap mass-production means of decoration when compared with the
carefully drafted styles of Samarra and Halaf. The Ubaidian people made
other items of baked clay: sickles and axes of very hard-baked clay
are found. The northern Ubaidian sites have yielded tools of copper,
but metal tools of unquestionable Ubaidian find-spots are not yet
available from the south. Clay figurines of human beings with monstrous
turtle-like faces are another item in the southern Ubaidian assemblage.
[Illustration: SKETCH OF SELECTED ITEMS OF UBAIDIAN ASSEMBLAGE]
There is a large Ubaid cemetery at Eridu, much of it still awaiting
excavation. The few skeletons so far tentatively studied reveal a
completely modern type of Mediterraneanoid; the individuals whom the
skeletons represent would undoubtedly blend perfectly into the modern
population of southern Iraq. What the Ubaidian assemblage says to us is
that these people had already adapted themselves and their culture to
the peculiar riverine environment of classic southern Mesopotamia. For
example, hard-baked clay axes will chop bundles of reeds very well, or
help a mason dress his unbaked mud bricks, and there were only a few
soft and pithy species of trees available. The Ubaidian levels of Eridu
yield quantities of date pits; that excellent and characteristically
Iraqi fruit was already in use. The excavators also found the clay
model of a ship, with the stepping-point for a mast, so that Sinbad the
Sailor must have had his antecedents as early as the time of Ubaid.
The bones of fish, which must have flourished in the larger canals as
well as in the rivers, are common in the Ubaidian levels and thereafter.
THE UBAIDIAN ACHIEVEMENT
On present evidence, my tendency is to see the Ubaidian assemblage
in southern Iraq as the trace of a new era. I wish there were more
evidence, but what we have suggests this to me. The culture of southern
Ubaid soon became a culture of towns--of centrally located towns with
some rural villages about them. The town had a temple and there must
have been priests. These priests probably had political and economic
functions as well as religious ones, if the somewhat later history of
Mesopotamia may suggest a pattern for us. Presently the temple and its
priesthood were possibly the focus of the market; the temple received
its due, and may already have had its own lands and herds and flocks.
The people of the town, undoubtedly at least in consultation with the
temple administration, planned and maintained the simple irrigation
ditches. As the system flourished, the community of rural farmers would
have produced more than sufficient food. The tendency for specialized
crafts to develop--tentative at best at the cultural level of the
earlier village-farming community era--would now have been achieved,
and probably many other specialists in temple administration, water
control, architecture, and trade would also have appeared, as the
surplus food-supply was assured.
Southern Mesopotamia is not a land rich in natural resources other
than its fertile soil. Stone, good wood for construction, metal, and
innumerable other things would have had to be imported. Grain and
dates--although both are bulky and difficult to transport--and wool and
woven stuffs must have been the mediums of exchange. Over what area did
the trading net-work of Ubaid extend? We start with the idea that the
Ubaidian assemblage is most richly developed in the south. We assume, I
think, correctly, that it represents a cultural flowering of the south.
On the basis of the pottery of the still elusive Eridu immigrants
who had first followed the rivers into alluvial Mesopotamia, we get
the notion that the characteristic painted pottery style of Ubaid
was developed in the southland. If this reconstruction is correct
then we may watch with interest where the Ubaid pottery-painting
tradition spread. We have already mentioned that there is a substantial
assemblage of (and from the southern point of view, _fairly_ pure)
Ubaidian material in northern Iraq. The pottery appears all along the
Iranian flanks, even well east of the head of the Persian Gulf, and
ends in a later and spectacular flourish in an extremely handsome
painted style called the Susa style. Ubaidian pottery has been noted
up the valleys of both of the great rivers, well north of the Iraqi
and Syrian borders on the southern flanks of the Anatolian plateau.
It reaches the Mediterranean Sea and the valley of the Orontes in
Syria, and it may be faintly reflected in the painted style of a
site called Ghassul, on the east bank of the Jordan in the Dead Sea
Valley. Over this vast area--certainly in all of the great basin of
the Tigris-Euphrates drainage system and its natural extensions--I
believe we may lay our fingers on the traces of a peculiar way of
decorating pottery, which we call Ubaidian. This cursive and even
slap-dash decoration, it appears to me, was part of a new cultural
tradition which arose from the adjustments which immigrant northern
farmers first made to the new and challenging environment of southern
Mesopotamia. But exciting as the idea of the spread of influences of
the Ubaid tradition in space may be, I believe you will agree that the
consequences of the growth of that tradition in southern Mesopotamia
itself, as time passed, are even more important.
THE WARKA PHASE IN THE SOUTH
So far, there are only two radiocarbon determinations for the Ubaidian
assemblage, one from Tepe Gawra in the north and one from Warka in the
south. My hunch would be to use the dates 4500 to 3750 B.C., with a
plus or more probably a minus factor of about two hundred years for
each, as the time duration of the Ubaidian assemblage in southern
Mesopotamia.
Next, much to our annoyance, we have what is almost a temporary
black-out. According to the system of terminology I favor, our next
assemblage after that of Ubaid is called the _Warka_ phase, from
the Arabic name for the site of Uruk or Erech. We know it only from
six or seven levels in a narrow test-pit at Warka, and from an even
smaller hole at another site. This assemblage, so far, is known only
by its pottery, some of which still bears Ubaidian style painting. The
characteristic Warkan pottery is unpainted, with smoothed red or gray
surfaces and peculiar shapes. Unquestionably, there must be a great
deal more to say about the Warkan assemblage, but someone will first
have to excavate it!
THE DAWN OF CIVILIZATION
After our exasperation with the almost unknown Warka interlude,
following the brilliant false dawn of Ubaid, we move next to an
assemblage which yields traces of a preponderance of those elements
which we noted (p. 144) as meaning civilization. This assemblage
is that called _Proto-Literate_; it already contains writing. On
the somewhat shaky principle that writing, however early, means
history--and no longer prehistory--the assemblage is named for the
historical implications of its content, and no longer after the name of
the site where it was first found. Since some of the older books used
site-names for this assemblage, I will tell you that the Proto-Literate
includes the latter half of what used to be called the Uruk period
_plus_ all of what used to be called the Jemdet Nasr period. It shows
a consistent development from beginning to end.
I shall, in fact, leave much of the description and the historic
implications of the Proto-Literate assemblage to the conventional
historians. Professor T. J. Jacobsen, reaching backward from the
legends he finds in the cuneiform writings of slightly later times, can
in fact tell you a more complete story of Proto-Literate culture than
I can. It should be enough here if I sum up briefly what the excavated
archeological evidence shows.
We have yet to dig a Proto-Literate site in its entirety, but the
indications are that the sites cover areas the size of small cities.
In architecture, we know of large and monumental temple structures,
which were built on elaborate high terraces. The plans and decoration
of these temples follow the pattern set in the Ubaid phase: the chief
difference is one of size. The German excavators at the site of Warka
reckoned that the construction of only one of the Proto-Literate temple
complexes there must have taken 1,500 men, each working a ten-hour day,
five years to build.
ART AND WRITING
If the architecture, even in its monumental forms, can be seen to
stem from Ubaidian developments, this is not so with our other
evidence of Proto-Literate artistic expression. In relief and applied
sculpture, in sculpture in the round, and on the engraved cylinder
seals--all of which now make their appearance--several completely
new artistic principles are apparent. These include the composition
of subject-matter in groups, commemorative scenes, and especially
the ability and apparent desire to render the human form and face.
Excellent as the animals of the Franco-Cantabrian art may have been
(see p. 85), and however handsome were the carefully drafted
geometric designs and conventionalized figures on the pottery of the
early farmers, there seems to have been, up to this time, a mental
block about the drawing of the human figure and especially the human
face. We do not yet know what caused this self-consciousness about
picturing themselves which seems characteristic of men before the
appearance of civilization. We do know that with civilization, the
mental block seems to have been removed.
Clay tablets bearing pictographic signs are the Proto-Literate
forerunners of cuneiform writing. The earliest examples are not well
understood but they seem to be devices for making accounts and
for remembering accounts. Different from the later case in Egypt,
where writing appears fully formed in the earliest examples, the
development from simple pictographic signs to proper cuneiform writing
may be traced, step by step, in Mesopotamia. It is most probable
that the development of writing was connected with the temple and
the need for keeping account of the temple's possessions. Professor
Jacobsen sees writing as a means for overcoming space, time, and the
increasing complications of human affairs: "Literacy, which began
with ... civilization, enhanced mightily those very tendencies in its
development which characterize it as a civilization and mark it off as
such from other types of culture."
[Illustration: RELIEF ON A PROTO-LITERATE STONE VASE, WARKA
Unrolled drawing, with restoration suggested by figures from
contemporary cylinder seals]
While the new principles in art and the idea of writing are not
foreshadowed in the Ubaid phase, or in what little we know of the
Warkan, I do not think we need to look outside southern Mesopotamia
for their beginnings. We do know something of the adjacent areas,
too, and these beginnings are not there. I think we must accept them
as completely new discoveries, made by the people who were developing
the whole new culture pattern of classic southern Mesopotamia. Full
description of the art, architecture, and writing of the Proto-Literate
phase would call for many details. Men like Professor Jacobsen and Dr.
Adams can give you these details much better than I can. Nor shall I do
more than tell you that the common pottery of the Proto-Literate phase
was so well standardized that it looks factory made. There was also
some handsome painted pottery, and there were stone bowls with inlaid
decoration. Well-made tools in metal had by now become fairly common,
and the metallurgist was experimenting with the casting process. Signs
for plows have been identified in the early pictographs, and a wheeled
chariot is shown on a cylinder seal engraving. But if I were forced to
a guess in the matter, I would say that the development of plows and
draft-animals probably began in the Ubaid period and was another of the
great innovations of that time.
The Proto-Literate assemblage clearly suggests a highly developed and
sophisticated culture. While perhaps not yet fully urban, it is on
the threshold of urbanization. There seems to have been a very dense
settlement of Proto-Literate sites in classic southern Mesopotamia,
many of them newly founded on virgin soil where no earlier settlements
had been. When we think for a moment of what all this implies, of the
growth of an irrigation system which must have existed to allow the
flourish of this culture, and of the social and political organization
necessary to maintain the irrigation system, I think we will agree that
at last we are dealing with civilization proper.
FROM PREHISTORY TO HISTORY
Now it is time for the conventional ancient historians to take over
the story from me. Remember this when you read what they write. Their
real base-line is with cultures ruled over by later kings and emperors,
whose writings describe military campaigns and the administration of
laws and fully organized trading ventures. To these historians, the
Proto-Literate phase is still a simple beginning for what is to follow.
If they mention the Ubaid assemblage at all--the one I was so lyrical
about--it will be as some dim and fumbling step on the path to the
civilized way of life.
I suppose you could say that the difference in the approach is that as
a prehistorian I have been looking forward or upward in time, while the
historians look backward to glimpse what I've been describing here. My
base-line was half a million years ago with a being who had little more
than the capacity to make tools and fire to distinguish him from the
animals about him. Thus my point of view and that of the conventional
historian are bound to be different. You will need both if you want to
understand all of the story of men, as they lived through time to the
present.
End of PREHISTORY
[Illustration]
You'll doubtless easily recall your general course in ancient history:
how the Sumerian dynasties of Mesopotamia were supplanted by those of
Babylonia, how the Hittite kingdom appeared in Anatolian Turkey, and
about the three great phases of Egyptian history. The literate kingdom
of Crete arose, and by 1500 B.C. there were splendid fortified Mycenean
towns on the mainland of Greece. This was the time--about the whole
eastern end of the Mediterranean--of what Professor Breasted called the
"first great internationalism," with flourishing trade, international
treaties, and royal marriages between Egyptians, Babylonians, and
Hittites. By 1200 B.C., the whole thing had fragmented: the "peoples of
the sea" were restless in their isles, and the great ancient centers in
Egypt, Mesopotamia, and Anatolia were eclipsed. Numerous smaller states
arose--Assyria, Phoenicia, Israel--and the Trojan war was fought.
Finally Assyria became the paramount power of all the Near East,
presently to be replaced by Persia.
A new culture, partaking of older west Asiatic and Egyptian elements,
but casting them with its own tradition into a new mould, arose in
mainland Greece.
I once shocked my Classical colleagues to the core by referring to
Greece as a "second degree derived" civilization, but there is much
truth in this. The principles of bronze- and then of iron-working, of
the alphabet, and of many other elements in Greek culture were borrowed
from western Asia. Our debt to the Greeks is too well known for me even
to mention it, beyond recalling to you that it is to Greece we owe the
beginnings of rational or empirical science and thought in general. But
Greece fell in its turn to Rome, and in 55 B.C. Caesar invaded Britain.
I last spoke of Britain on page 142; I had chosen it as my single
example for telling you something of how the earliest farming
communities were established in Europe. Now I will continue with
Britain's later prehistory, so you may sense something of the end of
prehistory itself. Remember that Britain is simply a single example
we select; the same thing could be done for all the other countries
of Europe, and will be possible also, some day, for further Asia and
Africa. Remember, too, that prehistory in most of Europe runs on for
three thousand or more years _after_ conventional ancient history
begins in the Near East. Britain is a good example to use in showing
how prehistory ended in Europe. As we said earlier, it lies at the
opposite end of Europe from the area of highest cultural achievement in
those times, and should you care to read more of the story in detail,
you may do so in the English language.
METAL USERS REACH ENGLAND
We left the story of Britain with the peoples who made three different
assemblages--the Windmill Hill, the megalith-builders, and the
Peterborough--making adjustments to their environments, to the original
inhabitants of the island, and to each other. They had first arrived
about 2500 B.C., and were simple pastoralists and hoe cultivators who
lived in little village communities. Some of them planted little if any
grain. By 2000 B.C., they were well settled in. Then, somewhere in the
range from about 1900 to 1800 B.C., the traces of the invasion of a new
series of peoples began to appear.
The first newcomers are called the Beaker folk, after the name of a
peculiar form of pottery they made. The beaker type of pottery seems
oldest in Spain, where it occurs with great collective tombs of
megalithic construction and with copper tools. But the Beaker folk who
reached England seem already to have moved first from Spain(?) to the
Rhineland and Holland. While in the Rhineland, and before leaving for
England, the Beaker folk seem to have mixed with the local population
and also with incomers from northeastern Europe whose culture included
elements brought originally from the Near East by the eastern way
through the steppes. This last group has also been named for a peculiar
article in its assemblage; the group is called the Battle-axe folk. A
few Battle-axe folk elements, including, in fact, stone battle-axes,
reached England with the earliest Beaker folk,[6] coming from the
Rhineland.
[6] The British authors use the term "Beaker folk" to mean both
archeological assemblage and human physical type. They speak
of a "... tall, heavy-boned, rugged, and round-headed" strain
which they take to have developed, apparently in the Rhineland,
by a mixture of the original (Spanish?) beaker-makers and
the northeast European battle-axe makers. However, since the
science of physical anthropology is very much in flux at the
moment, and since I am not able to assess the evidence for these
physical types, I _do not_ use the term "folk" in this book with
its usual meaning of standardized physical type. When I use
"folk" here, I mean simply _the makers of a given archeological
assemblage_. The difficulty only comes when assemblages are
named for some item in them; it is too clumsy to make an
adjective of the item and refer to a "beakerian" assemblage.
The Beaker folk settled earliest in the agriculturally fertile south
and east. There seem to have been several phases of Beaker folk
invasions, and it is not clear whether these all came strictly from the
Rhineland or Holland. We do know that their copper daggers and awls
and armlets are more of Irish or Atlantic European than of Rhineland
origin. A few simple habitation sites and many burials of the Beaker
folk are known. They buried their dead singly, sometimes in conspicuous
individual barrows with the dead warrior in his full trappings. The
spectacular element in the assemblage of the Beaker folk is a group
of large circular monuments with ditches and with uprights of wood or
stone. These "henges" became truly monumental several hundred years
later; while they were occasionally dedicated with a burial, they were
not primarily tombs. The effect of the invasion of the Beaker folk
seems to cut across the whole fabric of life in Britain.
[Illustration: BEAKER]
There was, however, a second major element in British life at this
time. It shows itself in the less well understood traces of a group
again called after one of the items in their catalogue, the Food-vessel
folk. There are many burials in these food-vessel pots in northern
England, Scotland, and Ireland, and the pottery itself seems to
link back to that of the Peterborough assemblage. Like the earlier
Peterborough people in the highland zone before them, the makers of
the food-vessels seem to have been heavily involved in trade. It is
quite proper to wonder whether the food-vessel pottery itself was made
by local women who were married to traders who were middlemen in the
transmission of Irish metal objects to north Germany and Scandinavia.
The belt of high, relatively woodless country, from southwest to
northeast, was already established as a natural route for inland trade.
MORE INVASIONS
About 1500 B.C., the situation became further complicated by the
arrival of new people in the region of southern England anciently
called Wessex. The traces suggest the Brittany coast of France as a
source, and the people seem at first to have been a small but heroic
group of aristocrats. Their "heroes" are buried with wealth and
ceremony, surrounded by their axes and daggers of bronze, their gold
ornaments, and amber and jet beads. These rich finds show that the
trade-linkage these warriors patronized spread from the Baltic sources
of amber to Mycenean Greece or even Egypt, as evidenced by glazed blue
beads.
The great visual trace of Wessex achievement is the final form of
the spectacular sanctuary at Stonehenge. A wooden henge or circular
monument was first made several hundred years earlier, but the site
now received its great circles of stone uprights and lintels. The
diameter of the surrounding ditch at Stonehenge is about 350 feet, the
diameter of the inner circle of large stones is about 100 feet, and
the tallest stone of the innermost horseshoe-shaped enclosure is 29
feet 8 inches high. One circle is made of blue stones which must have
been transported from Pembrokeshire, 145 miles away as the crow flies.
Recently, many carvings representing the profile of a standard type of
bronze axe of the time, and several profiles of bronze daggers--one of
which has been called Mycenean in type--have been found carved in the
stones. We cannot, of course, describe the details of the religious
ceremonies which must have been staged in Stonehenge, but we can
certainly imagine the well-integrated and smoothly working culture
which must have been necessary before such a great monument could have
been built.
THIS ENGLAND
The range from 1900 to about 1400 B.C. includes the time of development
of the archeological features usually called the "Early Bronze Age"
in Britain. In fact, traces of the Wessex warriors persisted down to
about 1200 B.C. The main regions of the island were populated, and the
adjustments to the highland and lowland zones were distinct and well
marked. The different aspects of the assemblages of the Beaker folk and
the clearly expressed activities of the Food-vessel folk and the Wessex
warriors show that Britain was already taking on her characteristic
trading role, separated from the European continent but conveniently
adjacent to it. The tin of Cornwall--so important in the production
of good bronze--as well as the copper of the west and of Ireland,
taken with the gold of Ireland and the general excellence of Irish
metal work, assured Britain a trader's place in the then known world.
Contacts with the eastern Mediterranean may have been by sea, with
Cornish tin as the attraction, or may have been made by the Food-vessel
middlemen on their trips to the Baltic coast. There they would have
encountered traders who traveled the great north-south European road,
by which Baltic amber moved southward to Greece and the Levant, and
ideas and things moved northward again.
There was, however, the Channel between England and Europe, and this
relative isolation gave some peace and also gave time for a leveling
and further fusion of culture. The separate cultural traditions began
to have more in common. The growing of barley, the herding of sheep and
cattle, and the production of woolen garments were already features
common to all Britain's inhabitants save a few in the remote highlands,
the far north, and the distant islands not yet fully touched by
food-production. The personality of Britain was being formed.
CREMATION BURIALS BEGIN
Along with people of certain religious faiths, archeologists are
against cremation (for other people!). Individuals to be cremated seem
in past times to have been dressed in their trappings and put upon a
large pyre: it takes a lot of wood and a very hot fire for a thorough
cremation. When the burning had been completed, the few fragile scraps
of bone and such odd beads of stone or other rare items as had resisted
the great heat seem to have been whisked into a pot and the pot buried.
The archeologist is left with the pot and the unsatisfactory scraps in
it.
Tentatively, after about 1400 B.C. and almost completely over the whole
island by 1200 B.C., Britain became the scene of cremation burials
in urns. We know very little of the people themselves. None of their
settlements have been identified, although there is evidence that they
grew barley and made enclosures for cattle. The urns used for the
burials seem to have antecedents in the pottery of the Food-vessel
folk, and there are some other links with earlier British traditions.
In Lancashire, a wooden circle seems to have been built about a grave
with cremated burials in urns. Even occasional instances of cremation
may be noticed earlier in Britain, and it is not clear what, if any,
connection the British cremation burials in urns have with the classic
_Urnfields_ which were now beginning in the east Mediterranean and
which we shall mention below.
The British cremation-burial-in-urns folk survived a long time in the
highland zone. In the general British scheme, they make up what is
called the "Middle Bronze Age," but in the highland zone they last
until after 900 B.C. and are considered to be a specialized highland
"Late Bronze Age." In the highland zone, these later cremation-burial
folk seem to have continued the older Food-vessel tradition of being
middlemen in the metal market.
Granting that our knowledge of this phase of British prehistory is
very restricted because the cremations have left so little for the
archeologist, it does not appear that the cremation-burial-urn folk can
be sharply set off from their immediate predecessors. But change on a
grander scale was on the way.
REVERBERATIONS FROM CENTRAL EUROPE
In the centuries immediately following 1000 B.C., we see with fair
clarity two phases of a cultural process which must have been going
on for some time. Certainly several of the invasions we have already
described in this chapter were due to earlier phases of the same
cultural process, but we could not see the details.
[Illustration: SLASHING SWORD]
Around 1200 B.C. central Europe was upset by the spread of the
so-called Urnfield folk, who practiced cremation burial in urns and
whom we also know to have been possessors of long, slashing swords and
the horse. I told you above that we have no idea that the Urnfield
folk proper were in any way connected with the people who made
cremation-burial-urn cemeteries a century or so earlier in Britain. It
has been supposed that the Urnfield folk themselves may have shared
ideas with the people who sacked Troy. We know that the Urnfield
pressure from central Europe displaced other people in northern France,
and perhaps in northwestern Germany, and that this reverberated into
Britain about 1000 B.C.
Soon after 750 B.C., the same thing happened again. This time, the
pressure from central Europe came from the Hallstatt folk who were iron
tool makers: the reverberation brought people from the western Alpine
region across the Channel into Britain.
At first it is possible to see the separate results of these folk
movements, but the developing cultures soon fused with each other and
with earlier British elements. Presently there were also strains of
other northern and western European pottery and traces of Urnfield
practices themselves which appeared in the finished British product. I
hope you will sense that I am vastly over-simplifying the details.
The result seems to have been--among other things--a new kind of
agricultural system. The land was marked off by ditched divisions.
Rectangular fields imply the plow rather than hoe cultivation. We seem
to get a picture of estate or tribal boundaries which included village
communities; we find a variety of tools in bronze, and even whetstones
which show that iron has been honed on them (although the scarce iron
has not been found). Let me give you the picture in Professor S.
Piggott's words: "The ... Late Bronze Age of southern England was but
the forerunner of the earliest Iron Age in the same region, not only in
the techniques of agriculture, but almost certainly in terms of ethnic
kinship ... we can with some assurance talk of the Celts ... the great
early Celtic expansion of the Continent is recognized to be that of the
Urnfield people."
Thus, certainly by 500 B.C., there were people in Britain, some of
whose descendants we may recognize today in name or language in remote
parts of Wales, Scotland, and the Hebrides.
THE COMING OF IRON
Iron--once the know-how of reducing it from its ore in a very hot,
closed fire has been achieved--produces a far cheaper and much more
efficient set of tools than does bronze. Iron tools seem first to
have been made in quantity in Hittite Anatolia about 1500 B.C. In
continental Europe, the earliest, so-called Hallstatt, iron-using
cultures appeared in Germany soon after 750 B.C. Somewhat later,
Greek and especially Etruscan exports of _objets d'art_--which moved
with a flourishing trans-Alpine wine trade--influenced the Hallstatt
iron-working tradition. Still later new classical motifs, together with
older Hallstatt, oriental, and northern nomad motifs, gave rise to a
new style in metal decoration which characterizes the so-called La Tène
phase.
A few iron users reached Britain a little before 400 B.C. Not long
after that, a number of allied groups appeared in southern and
southeastern England. They came over the Channel from France and must
have been Celts with dialects related to those already in England. A
second wave of Celts arrived from the Marne district in France about
250 B.C. Finally, in the second quarter of the first century B.C.,
there were several groups of newcomers, some of whom were Belgae of
a mixed Teutonic-Celtic confederacy of tribes in northern France and
Belgium. The Belgae preceded the Romans by only a few years.
HILL-FORTS AND FARMS
The earliest iron-users seem to have entrenched themselves temporarily
within hill-top forts, mainly in the south. Gradually, they moved
inland, establishing _individual_ farm sites with extensive systems
of rectangular fields. We recognize these fields by the lynchets or
lines of soil-creep which plowing left on the slopes of hills. New
crops appeared; there were now bread wheat, oats, and rye, as well as
barley.
At Little Woodbury, near the town of Salisbury, a farmstead has been
rather completely excavated. The rustic buildings were within a
palisade, the round house itself was built of wood, and there were
various outbuildings and pits for the storage of grain. Weaving was
done on the farm, but not blacksmithing, which must have been a
specialized trade. Save for the lack of firearms, the place might
almost be taken for a farmstead on the American frontier in the early
1800s.
Toward 250 B.C. there seems to have been a hasty attempt to repair the
hill-forts and to build new ones, evidently in response to signs of
restlessness being shown by remote relatives in France.
THE SECOND PHASE
Perhaps the hill-forts were not entirely effective or perhaps a
compromise was reached. In any case, the newcomers from the Marne
district did establish themselves, first in the southeast and then to
the north and west. They brought iron with decoration of the La Tène
type and also the two-wheeled chariot. Like the Wessex warriors of
over a thousand years earlier, they made heroes' graves, with their
warriors buried in the war-chariots and dressed in full trappings.
[Illustration: CELTIC BUCKLE]
The metal work of these Marnian newcomers is excellent. The peculiar
Celtic art style, based originally on the classic tendril motif,
is colorful and virile, and fits with Greek and Roman descriptions
of Celtic love of color in dress. There is a strong trace of these
newcomers northward in Yorkshire, linked by Ptolemy's description to
the Parisii, doubtless part of the Celtic tribe which originally gave
its name to Paris on the Seine. Near Glastonbury, in Somerset, two
villages in swamps have been excavated. They seem to date toward the
middle of the first century B.C., which was a troubled time in Britain.
The circular houses were built on timber platforms surrounded with
palisades. The preservation of antiquities by the water-logged peat of
the swamp has yielded us a long catalogue of the materials of these
villagers.
In Scotland, which yields its first iron tools at a date of about 100
B.C., and in northern Ireland even slightly earlier, the effects of the
two phases of newcomers tend especially to blend. Hill-forts, brochs
(stone-built round towers) and a variety of other strange structures
seem to appear as the new ideas develop in the comparative isolation of
northern Britain.
THE THIRD PHASE
For the time of about the middle of the first century B.C., we again
see traces of frantic hill-fort construction. This simple military
architecture now took some new forms. Its multiple ramparts must
reflect the use of slings as missiles, rather than spears. We probably
know the reason. In 56 B.C., Julius Caesar chastised the Veneti of
Brittany for outraging the dignity of Roman ambassadors. The Veneti
were famous slingers, and doubtless the reverberations of escaping
Veneti were felt across the Channel. The military architecture suggests
that some Veneti did escape to Britain.
Also, through Caesar, we learn the names of newcomers who arrived in
two waves, about 75 B.C. and about 50 B.C. These were the Belgae. Now,
at last, we can even begin to speak of dynasties and individuals.
Some time before 55 B.C., the Catuvellauni, originally from the Marne
district in France, had possessed themselves of a large part of
southeastern England. They evidently sailed up the Thames and built a
town of over a hundred acres in area. Here ruled Cassivellaunus, the
first man in England whose name we know, and whose town Caesar sacked.
The town sprang up elsewhere again, however.
THE END OF PREHISTORY
Prehistory, strictly speaking, is now over in southern Britain.
Claudius' effective invasion took place in 43 A.D.; by 83 A.D., a raid
had been made as far north as Aberdeen in Scotland. But by 127 A.D.,
Hadrian had completed his wall from the Solway to the Tyne, and the
Romans settled behind it. In Scotland, Romanization can have affected
the countryside very little. Professor Piggott adds that "... it is
when the pressure of Romanization is relaxed by the break-up of the
Dark Ages that we see again the Celtic metal-smiths handling their
material with the same consummate skill as they had before the Roman
Conquest, and with traditional styles that had not even then forgotten
their Marnian and Belgic heritage."
In fact, many centuries go by, in Britain as well as in the rest of
Europe, before the archeologist's task is complete and the historian on
his own is able to describe the ways of men in the past.
BRITAIN AS A SAMPLE OF THE GENERAL COURSE OF PREHISTORY IN EUROPE
In giving this very brief outline of the later prehistory of Britain,
you will have noticed how often I had to refer to the European
continent itself. Britain, beyond the English Channel for all of her
later prehistory, had a much simpler course of events than did most of
the rest of Europe in later prehistoric times. This holds, in spite
of all the invasions and reverberations from the continent. Most
of Europe was the scene of an even more complicated ebb and flow of
cultural change, save in some of its more remote mountain valleys and
peninsulas.
The whole course of later prehistory in Europe is, in fact, so very
complicated that there is no single good book to cover it all;
certainly there is none in English. There are some good regional
accounts and some good general accounts of part of the range from about
3000 B.C. to A.D. 1. I suspect that the difficulty of making a good
book that covers all of its later prehistory is another aspect of what
makes Europe so very complicated a continent today. The prehistoric
foundations for Europe's very complicated set of civilizations,
cultures, and sub-cultures--which begin to appear as history
proceeds--were in themselves very complicated.
Hence, I selected the case of Britain as a single example of how
prehistory ends in Europe. It could have been more complicated than we
found it to be. Even in the subject matter on Britain in the chapter
before the last, we did not see direct traces of the effect on Britain
of the very important developments which took place in the Danubian
way from the Near East. Apparently Britain was not affected. Britain
received the impulses which brought copper, bronze, and iron tools from
an original east Mediterranean homeland into Europe, almost at the ends
of their journeys. But by the same token, they had had time en route to
take on their characteristic European aspects.
Some time ago, Sir Cyril Fox wrote a famous book called _The
Personality of Britain_, sub-titled "Its Influence on Inhabitant and
Invader in Prehistoric and Early Historic Times." We have not gone
into the post-Roman early historic period here; there are still the
Anglo-Saxons and Normans to account for as well as the effects of
the Romans. But what I have tried to do was to begin the story of
how the personality of Britain was formed. The principles that Fox
used, in trying to balance cultural and environmental factors and
interrelationships would not be greatly different for other lands.
Summary
[Illustration]
In the pages you have read so far, you have been brought through the
earliest 99 per cent of the story of man's life on this planet. I have
left only 1 per cent of the story for the historians to tell.
THE DRAMA OF THE PAST
Men first became men when evolution had carried them to a certain
point. This was the point where the eye-hand-brain co-ordination was
good enough so that tools could be made. When tools began to be made
according to sets of lasting habits, we know that men had appeared.
This happened over a half million years ago. The stage for the play
may have been as broad as all of Europe, Africa, and Asia. At least,
it seems unlikely that it was only one little region that saw the
beginning of the drama.
Glaciers and different climates came and went, to change the settings.
But the play went on in the same first act for a very long time. The
men who were the players had simple roles. They had to feed themselves
and protect themselves as best they could. They did this by hunting,
catching, and finding food wherever they could, and by taking such
protection as caves, fire, and their simple tools would give them.
Before the first act was over, the last of the glaciers was melting
away, and the players had added the New World to their stage. If
we want a special name for the first act, we could call it _The
Food-Gatherers_.
There were not many climaxes in the first act, so far as we can see.
But I think there may have been a few. Certainly the pace of the
first act accelerated with the swing from simple gathering to more
intensified collecting. The great cave art of France and Spain was
probably an expression of a climax. Even the ideas of burying the dead
and of the Venus figurines must also point to levels of human thought
and activity that were over and above pure food-getting.
THE SECOND ACT
The second act began only about ten thousand years ago. A few of the
players started it by themselves near the center of the Old World part
of the stage, in the Near East. It began as a plant and animal act, but
it soon became much more complicated.
But the players in this one part of the stage--in the Near East--were
not the only ones to start off on the second act by themselves. Other
players, possibly in several places in the Far East, and certainly in
the New World, also started second acts that began as plant and animal
acts, and then became complicated. We can call the whole second act
_The Food-Producers_.
THE FIRST GREAT CLIMAX OF THE SECOND ACT
In the Near East, the first marked climax of the second act happened
in Mesopotamia and Egypt. The play and the players reached that great
climax that we call civilization. This seems to have come less than
five thousand years after the second act began. But it could never have
happened in the first act at all.
There is another curious thing about the first act. Many of the players
didn't know it was over and they kept on with their roles long after
the second act had begun. On the edges of the stage there are today
some players who are still going on with the first act. The Eskimos,
and the native Australians, and certain tribes in the Amazon jungle are
some of these players. They seem perfectly happy to keep on with the
first act.
The second act moved from climax to climax. The civilizations of
Mesopotamia and Egypt were only the earliest of these climaxes. The
players to the west caught the spirit of the thing, and climaxes
followed there. So also did climaxes come in the Far Eastern and New
World portions of the stage.
The greater part of the second act should really be described to you
by a historian. Although it was a very short act when compared to the
first one, the climaxes complicate it a great deal. I, a prehistorian,
have told you about only the first act, and the very beginning of the
second.
THE THIRD ACT
Also, as a prehistorian I probably should not even mention the third
act--it began so recently. The third act is _The Industrialization_.
It is the one in which we ourselves are players. If the pace of the
second act was so much faster than that of the first, the pace of the
third act is terrific. The danger is that it may wear down the players
completely.
What sort of climaxes will the third act have, and are we already in
one? You have seen by now that the acts of my play are given in terms
of modes or basic patterns of human economy--ways in which people
get food and protection and safety. The climaxes involve more than
human economy. Economics and technological factors may be part of the
climaxes, but they are not all. The climaxes may be revolutions in
their own way, intellectual and social revolutions if you like.
If the third act follows the pattern of the second act, a climax should
come soon after the act begins. We may be due for one soon if we are
not already in it. Remember the terrific pace of this third act.
WHY BOTHER WITH PREHISTORY?
Why do we bother about prehistory? The main reason is that we think it
may point to useful ideas for the present. We are in the troublesome
beginnings of the third act of the play. The beginnings of the second
act may have lessons for us and give depth to our thinking. I know
there are at least _some_ lessons, even in the present incomplete
state of our knowledge. The players who began the second act--that of
food-production--separately, in different parts of the world, were not
all of one pure race nor did they have pure cultural traditions.
Some apparently quite mixed Mediterraneans got off to the first start
on the second act and brought it to its first two climaxes as well.
Peoples of quite different physical type achieved the first climaxes in
China and in the New World.
In our British example of how the late prehistory of Europe worked, we
listed a continuous series of invasions and reverberations. After
each of these came fusion. Even though the Channel protected Britain
from some of the extreme complications of the mixture and fusion of
continental Europe, you can see how silly it would be to refer to a
"pure" British race or a "pure" British culture. We speak of the United
States as a "melting pot." But this is nothing new. Actually, Britain
and all the rest of the world have been melting pots at one time or
another.
By the time the written records of Mesopotamia and Egypt begin to turn
up in number, the climaxes there are well under way. To understand the
beginnings of the climaxes, and the real beginnings of the second act
itself, we are thrown back on prehistoric archeology. And this is as
true for China, India, Middle America, and the Andes, as it is for the
Near East.
There are lessons to be learned from all of man's past, not simply
lessons of how to fight battles or win peace conferences, but of how
human society evolves from one stage to another. Many of these lessons
can only be looked for in the prehistoric past. So far, we have only
made a beginning. There is much still to do, and many gaps in the story
are yet to be filled. The prehistorian's job is to find the evidence,
to fill the gaps, and to discover the lessons men have learned in the
past. As I see it, this is not only an exciting but a very practical
goal for which to strive.
List of Books
BOOKS OF GENERAL INTEREST
(Chosen from a variety of the increasingly useful list of cheap
paperbound books.)
Childe, V. Gordon
_What Happened in History._ 1954. Penguin.
_Man Makes Himself._ 1955. Mentor.
_The Prehistory of European Society._ 1958. Penguin.
Dunn, L. C., and Dobzhansky, Th.
_Heredity, Race, and Society._ 1952. Mentor.
Frankfort, Henri, Frankfort, H. A., Jacobsen, Thorkild, and Wilson,
John A.
_Before Philosophy._ 1954. Penguin.
Simpson, George G.
_The Meaning of Evolution._ 1955. Mentor.
Wheeler, Sir Mortimer
_Archaeology from the Earth._ 1956. Penguin.
GEOCHRONOLOGY AND THE ICE AGE
(Two general books. Some Pleistocene geologists disagree with Zeuner's
interpretation of the dating evidence, but their points of view appear
in professional journals, in articles too cumbersome to list here.)
Flint, R. F.
_Glacial Geology and the Pleistocene Epoch._ 1947. John Wiley
and Sons.
Zeuner, F. E.
_Dating the Past._ 1952 (3rd ed.). Methuen and Co.
FOSSIL MEN AND RACE
(The points of view of physical anthropologists and human
paleontologists are changing very quickly. Two of the different points
of view are listed here.)
Clark, W. E. Le Gros
_History of the Primates._ 1956 (5th ed.). British Museum
(Natural History). (Also in Phoenix edition, 1957.)
Howells, W. W.
_Mankind So Far._ 1944. Doubleday, Doran.
GENERAL ANTHROPOLOGY
(These are standard texts not absolutely up to date in every detail, or
interpretative essays concerned with cultural change through time as
well as in space.)
Kroeber, A. L.
_Anthropology._ 1948. Harcourt, Brace.
Linton, Ralph
_The Tree of Culture._ 1955. Alfred A. Knopf, Inc.
Redfield, Robert
_The Primitive World and Its Transformations._ 1953. Cornell
University Press.
Steward, Julian H.
_Theory of Culture Change._ 1955. University of Illinois Press.
White, Leslie
_The Science of Culture._ 1949. Farrar, Strauss.
GENERAL PREHISTORY
(A sampling of the more useful and current standard works in English.)
Childe, V. Gordon
_The Dawn of European Civilization._ 1957. Kegan Paul, Trench,
Trubner.
_Prehistoric Migrations in Europe._ 1950. Instituttet for
Sammenlignende Kulturforskning.
Clark, Grahame
_Archaeology and Society._ 1957. Harvard University Press.
Clark, J. G. D.
_Prehistoric Europe: The Economic Basis._ 1952. Methuen and Co.
Garrod, D. A. E.
_Environment, Tools, and Man._ 1946. Cambridge University
Press.
Movius, Hallam L., Jr.
"Old World Prehistory: Paleolithic" in _Anthropology Today_.
Kroeber, A. L., ed. 1953. University of Chicago Press.
Oakley, Kenneth P.
_Man the Tool-Maker._ 1956. British Museum (Natural History).
(Also in Phoenix edition, 1957.)
Piggott, Stuart
_British Prehistory._ 1949. Oxford University Press.
Pittioni, Richard
_Die Urgeschichtlichen Grundlagen der Europäischen Kultur._
1949. Deuticke. (A single book which does attempt to cover the
whole range of European prehistory to ca. 1 A.D.)
THE NEAR EAST
Adams, Robert M.
"Developmental Stages in Ancient Mesopotamia," _in_ Steward,
Julian, _et al_, _Irrigation Civilizations: A Comparative
Study_. 1955. Pan American Union.
Braidwood, Robert J.
_The Near East and the Foundations for Civilization._ 1952.
University of Oregon.
Childe, V. Gordon
_New Light on the Most Ancient East._ 1952. Oriental Dept.,
Routledge and Kegan Paul.
Frankfort, Henri
_The Birth of Civilization in the Near East._ 1951. University
of Indiana Press. (Also in Anchor edition, 1956.)
Pallis, Svend A.
_The Antiquity of Iraq._ 1956. Munksgaard.
Wilson, John A.
_The Burden of Egypt._ 1951. University of Chicago Press. (Also
in Phoenix edition, called _The Culture of Ancient Egypt_,
1956.)
HOW DIGGING IS DONE
Braidwood, Linda
_Digging beyond the Tigris._ 1953. Schuman, New York.
Wheeler, Sir Mortimer
_Archaeology from the Earth._ 1954. Oxford, London.
Index
Abbevillian, 48;
core-biface tool, 44, 48
Acheulean, 48, 60
Acheuleo-Levalloisian, 63
Acheuleo-Mousterian, 63
Adams, R. M., 106
Adzes, 45
Africa, east, 67, 89;
north, 70, 89;
south, 22, 25, 34, 40, 67
Agriculture, incipient, in England, 140;
in Near East, 123
Ain Hanech, 48
Amber, taken from Baltic to Greece, 167
American Indians, 90, 142
Anatolia, used as route to Europe, 138
Animals, in caves, 54, 64;
in cave art, 85
Antevs, Ernst, 19
Anyathian, 47
Archeological interpretation, 8
Archeology, defined, 8
Architecture, at Jarmo, 128;
at Jericho, 133
Arrow, points, 94;
shaft straightener, 83
Art, in caves, 84;
East Spanish, 85;
figurines, 84;
Franco-Cantabrian, 84, 85;
movable (engravings, modeling, scratchings), 83;
painting, 83;
sculpture, 83
Asia, western, 67
Assemblage, defined, 13, 14;
European, 94;
Jarmo, 129;
Maglemosian, 94;
Natufian, 113
Aterian, industry, 67;
point, 89
Australopithecinae, 24
Australopithecine, 25, 26
Awls, 77
Axes, 62, 94
Ax-heads, 15
Azilian, 97
Aztecs, 145
Baghouz, 152
Bakun, 134
Baltic sea, 93
Banana, 107
Barley, wild, 108
Barrow, 141
Battle-axe folk, 164;
assemblage, 164
Beads, 80;
bone, 114
Beaker folk, 164;
assemblage, 164-165
Bear, in cave art, 85;
cult, 68
Belgium, 94
Belt cave, 126
Bering Strait, used as route to New World, 98
Bison, in cave art, 85
Blade, awl, 77;
backed, 75;
blade-core, 71;
end-scraper, 77;
stone, defined, 71;
strangulated (notched), 76;
tanged point, 76;
tools, 71, 75-80, 90;
tool tradition, 70
Boar, wild, in cave art, 85
Bogs, source of archeological materials, 94
Bolas, 54
Bordes, François, 62
Borer, 77
Boskop skull, 34
Boyd, William C., 35
Bracelets, 118
Brain, development of, 24
Breadfruit, 107
Breasted, James H., 107
Brick, at Jericho, 133
Britain, 94;
late prehistory, 163-175;
invaders, 173
Broch, 172
Buffalo, in China, 54;
killed by stampede, 86
Burials, 66, 86;
in henges, 164;
in urns, 168
Burins, 75
Burma, 90
Byblos, 134
Camel, 54
Cannibalism, 55
Cattle, wild, 85, 112;
in cave art, 85;
domesticated, 15;
at Skara Brae, 142
Caucasoids, 34
Cave men, 29
Caves, 62;
art in, 84
Celts, 170
Chariot, 160
Chicken, domestication of, 107
Chiefs, in food-gathering groups, 68
Childe, V. Gordon, 8
China, 136
Choukoutien, 28, 35
Choukoutienian, 47
Civilization, beginnings, 144, 149, 157;
meaning of, 144
Clactonian, 45, 47
Clay, used in modeling, 128;
baked, used for tools, 153
Club-heads, 82, 94
Colonization, in America, 142;
in Europe, 142
Combe Capelle, 30
Combe Capelle-Brünn group, 34
Commont, Victor, 51
Coon, Carlton S., 73
Copper, 134
Corn, in America, 145
Corrals for cattle, 140
Cradle of mankind, 136
Cremation, 167
Crete, 162
Cro-Magnon, 30, 34
Cultivation, incipient, 105, 109, 111
Culture, change, 99;
characteristics, defined, 38, 49;
prehistoric, 39
Danube Valley, used as route from Asia, 138
Dates, 153
Deer, 54, 96
Dog, domesticated, 96
Domestication, of animals, 100, 105, 107;
of plants, 100
"Dragon teeth" fossils in China, 28
Drill, 77
Dubois, Eugene, 26
Early Dynastic Period, Mesopotamia, 147
East Spanish art, 72, 85
Egypt, 70, 126
Ehringsdorf, 31
Elephant, 54
Emiliani, Cesare, 18
Emiran flake point, 73
England, 163-168;
prehistoric, 19, 40;
farmers in, 140
Eoanthropus dawsoni, 29
Eoliths, 41
Erich, 152
Eridu, 152
Euphrates River, floods in, 148
Europe, cave dwellings, 58;
at end of Ice Age, 93;
early farmers, 140;
glaciers in, 40;
huts in, 86;
routes into, 137-140;
spread of food-production to, 136
Far East, 69, 90
Farmers, 103
Fauresmith industry, 67
Fayum, 135;
radiocarbon date, 146
Fertile Crescent, 107, 146
Figurines, Venus, 84;
at Jarmo, 128;
at Ubaid, 153
Fire, used by Peking man, 54
First Dynasty, Egypt, 147
Fish-hooks, 80, 94
Fishing, 80;
by food-producers, 122
Fish-lines, 80
Fish spears, 94
Flint industry, 127
Fontchevade, 32, 56, 58
Food-collecting, 104, 121;
end of, 104
Food-gatherers, 53, 176
Food-gathering, 99, 104;
in Old World, 104;
stages of, 104
Food-producers, 176
Food-producing economy, 122;
in America, 145;
in Asia, 105
Food-producing revolution, 99, 105;
causes of, 101;
preconditions for, 100
Food-production, beginnings of, 99;
carried to Europe, 110
Food-vessel folk, 164
Forest folk, 97, 98, 104, 110
Fox, Sir Cyril, 174
France, caves in, 56
Galley Hill (fossil type), 29
Garrod, D. A., 73
Gazelle, 114
Germany, 94
Ghassul, 156
Glaciers, 18, 30;
destruction by, 40
Goat, wild, 108;
domesticated, 128
Grain, first planted, 20
Graves, passage, 141;
gallery, 141
Greece, civilization in, 163;
as route to western Europe, 138;
towns in, 162
Grimaldi skeletons, 34
Hackberry seeds used as food, 55
Halaf, 151;
assemblage, 151
Hallstatt, tradition, 169
Hand, development of, 24, 25
Hand adzes, 46
Hand axes, 44
Harpoons, antler, 83, 94;
bone, 82, 94
Hassuna, 131;
assemblage, 131, 132
Heidelberg, fossil type, 28
Hill-forts, in England, 171;
in Scotland, 172
Hilly flanks of Near East, 107, 108, 125, 131, 146, 147
History, beginning of, 7, 17
Hoes, 112
Holland, 164
Homo sapiens, 32
Hooton, E. A., 34
Horse, 112;
wild, in cave art, 85;
in China, 54
Hotu cave, 126
Houses, 122;
at Jarmo, 128;
at Halaf, 151
Howe, Bruce, 116
Howell, F. Clark, 30
Hunting, 93
Ice Age, in Asia, 99;
beginning of, 18;
glaciers in, 41;
last glaciation, 93
Incas, 145
India, 90, 136
Industrialization, 178
Industry, blade-tool, 88;
defined, 58;
ground stone, 94
Internationalism, 162
Iran, 107, 147
Iraq, 107, 124, 127, 136, 147
Iron, introduction of, 170
Irrigation, 123, 149, 155
Italy, 138
Jacobsen, T. J., 157
Jarmo, 109, 126, 128, 130;
assemblage, 129
Java, 23, 29
Java man, 26, 27, 29
Jefferson, Thomas, 11
Jericho, 119, 133
Judaidah, 134
Kafuan, 48
Kanam, 23, 36
Karim Shahir, 116-119, 124;
assemblage, 116, 117
Keith, Sir Arthur, 33
Kelley, Harper, 51
Kharga, 126
Khartoum, 136
Knives, 80
Krogman, W. M., 3, 25
Lamps, 85
Land bridges in Mediterranean, 19
La Tène phase, 170
Laurel leaf point, 78, 89
Leakey, L. S. B., 40
Le Moustier, 57
Levalloisian, 47, 61, 62
Levalloiso-Mousterian, 47, 63
Little Woodbury, 170
Magic, used by hunters, 123
Maglemosian, assemblage, 94, 95;
folk, 98
Makapan, 40
Mammoth, 93;
in cave art, 85
Man-apes, 26
Mango, 107
Mankind, age, 17
Maringer, J., 45
Markets, 155
Marston, A. T., 11
Mathiassen, T., 97
McCown, T. D., 33
Meganthropus, 26, 27, 36
Men, defined, 25;
modern, 32
Merimde, 135
Mersin, 133
Metal-workers, 160, 163, 167, 172
Micoquian, 48, 60
Microliths, 87;
at Jarmo, 130;
lunates, 87;
trapezoids, 87;
triangles, 87
Minerals used as coloring matter, 66
Mine-shafts, 140
Mlefaat, 126, 127
Mongoloids, 29, 90
Mortars, 114, 118, 127
Mounds, how formed, 12
Mount Carmel, 11, 33, 52, 59, 64, 69, 113, 114
Mousterian man, 64
Mousterian tools, 61, 62;
of Acheulean tradition, 62
Movius, H. L., 47
Natufian, animals in, 114;
assemblage, 113, 114, 115;
burials, 114;
date of, 113
Neanderthal man, 29, 30, 31, 56
Near East, beginnings of civilization in, 20, 144;
cave sites, 58;
climate in Ice Age, 99;
Fertile Crescent, 107, 146;
food-production in, 99;
Natufian assemblage in, 113-115;
stone tools, 114
Needles, 80
Negroid, 34
New World, 90
Nile River valley, 102, 134;
floods in, 148
Nuclear area, 106, 110;
in Near East, 107
Obsidian, used for blade tools, 71;
at Jarmo, 130
Ochre, red, with burials, 86
Oldowan, 48
Old World, 67, 70, 90;
continental phases in, 18
Olorgesailie, 40, 51
Ostrich, in China, 54
Ovens, 128
Oxygen isotopes, 18
Paintings in caves, 83
Paleoanthropic man, 50
Palestine, burials, 56;
cave sites, 52;
types of man, 69
Parpallo, 89
Patjitanian, 45, 47
Pebble tools, 42
Peking cave, 54;
animals in, 54
Peking man, 27, 28, 29, 54, 58
Pendants, 80;
bone, 114
Pestle, 114
Peterborough, 141;
assemblage, 141
Pictographic signs, 158
Pig, wild, 108
Piltdown man, 29
Pins, 80
Pithecanthropus, 26, 27, 30, 36
Pleistocene, 18, 25
Plows developed, 123
Points, arrow, 76;
laurel leaf, 78;
shouldered, 78, 79;
split-based bone, 80, 82;
tanged, 76;
willow leaf, 78
Potatoes, in America, 145
Pottery, 122, 130, 156;
decorated, 142;
painted, 131, 151, 152;
Susa style, 156;
in tombs, 141
Prehistory, defined, 7;
range of, 18
Pre-neanderthaloids, 30, 31, 37
Pre-Solutrean point, 89
Pre-Stellenbosch, 48
Proto-Literate assemblage, 157-160
Race, 35;
biological, 36;
pure, 16
Radioactivity, 9, 10
Radioactive carbon dates, 18, 92, 120, 130, 135, 156
Redfield, Robert, 38, 49
Reed, C. A., 128
Reindeer, 94
Rhinoceros, 93;
in cave art, 85
Rhodesian man, 32
Riss glaciation, 58
Rock-shelters, 58;
art in, 85
Saccopastore, 31
Sahara Desert, 34, 102
Samarra, 152;
pottery, 131, 152
Sangoan industry, 67
Sauer, Carl, 136
Sbaikian point, 89
Schliemann, H., 11, 12
Scotland, 171
Scraper, flake, 79;
end-scraper on blade, 77, 78;
keel-shaped, 79, 80, 81
Sculpture in caves, 83
Sebilian III, 126
Shaheinab, 135
Sheep, wild, 108;
at Skara Brae, 142;
in China, 54
Shellfish, 142
Ship, Ubaidian, 153
Sialk, 126, 134;
assemblage, 134
Siberia, 88;
pathway to New World, 98
Sickle, 112, 153;
blade, 113, 130
Silo, 122
Sinanthropus, 27, 30, 35
Skara Brae, 142
Snails used as food, 128
Soan, 47
Solecki, R., 116
Solo (fossil type), 29, 32
Solutrean industry, 77
Spear, shaft, 78;
thrower, 82, 83
Speech, development of organs of, 25
Squash, in America, 145
Steinheim fossil skull, 28
Stillbay industry, 67
Stonehenge, 166
Stratification, in caves, 12, 57;
in sites, 12
Swanscombe (fossil type), 11, 28
Syria, 107
Tabun, 60, 71
Tardenoisian, 97
Taro, 107
Tasa, 135
Tayacian, 47, 59
Teeth, pierced, in beads and pendants, 114
Temples, 123, 155
Tepe Gawra, 156
Ternafine, 29
Teshik Tash, 69
Textiles, 122
Thong-stropper, 80
Tigris River, floods in, 148
Toggle, 80
Tomatoes, in America, 145
Tombs, megalithic, 141
Tool-making, 42, 49
Tool-preparation traditions, 65
Tools, 62;
antler, 80;
blade, 70, 71, 75;
bone, 66;
chopper, 47;
core-biface, 43, 48, 60, 61;
flake, 44, 47, 51, 60, 64;
flint, 80, 127;
ground stone, 68, 127;
handles, 94;
pebble, 42, 43, 48, 53;
use of, 24
Touf (mud wall), 128
Toynbee, A. J., 101
Trade, 130, 155, 162
Traders, 167
Traditions, 15;
blade tool, 70;
definition of, 51;
interpretation of, 49;
tool-making, 42, 48;
chopper-tool, 47;
chopper-chopping tool, 45;
core-biface, 43, 48;
flake, 44, 47;
pebble tool, 42, 48
Tool-making, prehistory of, 42
Turkey, 107, 108
Ubaid, 153;
assemblage, 153-155
Urnfields, 168, 169
Village-farming community era, 105, 119
Wad B, 72
Wadjak, 34
Warka phase, 156;
assemblage, 156
Washburn, Sherwood L., 36
Water buffalo, domestication of, 107
Weidenreich, F., 29, 34
Wessex, 166, 167
Wheat, wild, 108;
partially domesticated, 127
Willow leaf point, 78
Windmill Hill, 138;
assemblage, 138, 140
Witch doctors, 68
Wool, 112;
in garments, 167
Writing, 158;
cuneiform, 158
Würm I glaciation, 58
Zebu cattle, domestication of, 107
Zeuner, F. E., 73
* * * * * *
Transcriber's note:
Punctuation, hyphenation, and spelling were made consistent when a
predominant preference was found in this book; otherwise they were not
changed.
Simple typographical errors were corrected; occasional unbalanced
quotation marks retained.
Ambiguous hyphens at the ends of lines were retained.
Index not checked for proper alphabetization or correct page references.
In the original book, chapter headings were accompanied by
illustrations, sometimes above, sometimes below, and sometimes
adjacent. In this eBook those illustrations always appear below the
headings.
***END OF THE PROJECT GUTENBERG EBOOK PREHISTORIC MEN***
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE
TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE
LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR
INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH
DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a
defect in this electronic work within 90 days of receiving it, you can
receive a refund of the money (if any) you paid for it by sending a
written explanation to the person you received the work from. If you
received the work on a physical medium, you must return the medium
with your written explanation. The person or entity that provided you
with the defective work may elect to provide a replacement copy in
lieu of a refund. If you received the work electronically, the person
or entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund. If
the second copy is also defective, you may demand a refund in writing
without further opportunities to fix the problem.
1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you 'AS-IS', WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this agreement
violates the law of the state applicable to this agreement, the
agreement shall be interpreted to make the maximum disclaimer or
limitation permitted by the applicable state law. The invalidity or
unenforceability of any provision of this agreement shall not void the
remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the
trademark owner, any agent or employee of the Foundation, anyone
providing copies of Project Gutenberg-tm electronic works in
accordance with this agreement, and any volunteers associated with the
production, promotion and distribution of Project Gutenberg-tm
electronic works, harmless from all liability, costs and expenses,
including legal fees, that arise directly or indirectly from any of
the following which you do or cause to occur: (a) distribution of this
or any Project Gutenberg-tm work, (b) alteration, modification, or
additions or deletions to any Project Gutenberg-tm work, and (c) any
Defect you cause.
Section 2. Information about the Mission of Project Gutenberg-tm
Project Gutenberg-tm is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers. It
exists because of the efforts of hundreds of volunteers and donations
from people in all walks of life.
Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg-tm's
goals and ensuring that the Project Gutenberg-tm collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a secure
and permanent future for Project Gutenberg-tm and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help, see
Sections 3 and 4 and the Foundation information page at
www.gutenberg.org
Section 3. Information about the Project Gutenberg Literary
Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation's EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg Literary
Archive Foundation are tax deductible to the full extent permitted by
U.S. federal laws and your state's laws.
The Foundation's principal office is in Fairbanks, Alaska, with the
mailing address: PO Box 750175, Fairbanks, AK 99775, but its
volunteers and employees are scattered throughout numerous
locations. Its business office is located at 809 North 1500 West, Salt
Lake City, UT 84116, (801) 596-1887. Email contact links and up to
date contact information can be found at the Foundation's web site and
official page at www.gutenberg.org/contact
For additional contact information:
Dr. Gregory B. Newby
Chief Executive and Director
[email protected]
Section 4. Information about Donations to the Project Gutenberg
Literary Archive Foundation
Project Gutenberg-tm depends upon and cannot survive without wide
spread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can be
freely distributed in machine readable form accessible by the widest
array of equipment including outdated equipment. Many small donations
($1 to $5,000) are particularly important to maintaining tax exempt
status with the IRS.
The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and keep up
with these requirements. We do not solicit donations in locations
where we have not received written confirmation of compliance. To SEND
DONATIONS or determine the status of compliance for any particular
state visit www.gutenberg.org/donate
While we cannot and do not solicit contributions from states where we
have not met the solicitation requirements, we know of no prohibition
against accepting unsolicited donations from donors in such states who
approach us with offers to donate.
International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.
Please check the Project Gutenberg Web pages for current donation
methods and addresses. Donations are accepted in a number of other
ways including checks, online payments and credit card donations. To
donate, please visit: www.gutenberg.org/donate
Section 5. General Information About Project Gutenberg-tm electronic works.
Professor Michael S. Hart was the originator of the Project
Gutenberg-tm concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg-tm eBooks with only a loose network of
volunteer support.
Project Gutenberg-tm eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.
Most people start at our Web site which has the main PG search
facility: www.gutenberg.org
This Web site includes information about Project Gutenberg-tm,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how to
subscribe to our email newsletter to hear about new eBooks.
| The Project Gutenberg eBook, Prehistoric Men, by Robert J. (Robert John)
Braidwood, Illustrated by Susan T. Richert
This eBook is for the use of anyone anywhere in the United States and most
other parts of the world at no cost and with almost no restrictions
whatsoever. You may copy it, give it away or re-use it under the terms of
the Project Gutenberg License included with this eBook or online at
www.gutenberg.org. If you are not located in the United States, you'll have
to check the laws of the country where you are located before using this ebook.
Title: Prehistoric Men
Author: Robert J. (Robert John) Braidwood
Release Date: July 28, 2016 [eBook #52664]
Language: English
Character set encoding: UTF-8
***START OF THE PROJECT GUTENBERG EBOOK PREHISTORIC MEN***
E-text prepared by Stephen Hutcheson, Dave Morgan, Charlie Howard, and the
Online Distributed Proofreading Team (http://www.pgdp.net)
Note: Project Gutenberg also has an HTML version of this
file which includes the original illustrations.
See 52664-h.htm or 52664-h.zip:
(http://www.gutenberg.org/files/52664/52664-h/52664-h.htm)
or
(http://www.gutenberg.org/files/52664/52664-h.zip)
Transcriber's note:
Some characters might not display in this UTF-8 text
version. If so, the reader should consult the HTML
version referred to above. One example of this might
occur in the second paragraph under "Choppers and
Adze-like Tools", page 46, which contains the phrase
"an adze cutting edge is ? shaped." The symbol before
"shaped" looks like a sharply-italicized sans-serif L.
Devices that cannot display that symbol may substitute
a question mark, a square, or other symbol.
PREHISTORIC MEN
by
ROBERT J. BRAIDWOOD
Research Associate, Old World Prehistory
Professor
Oriental Institute and Department of Anthropology
University of Chicago
Drawings by Susan T. Richert
[Illustration]
Chicago Natural History Museum
Popular Series
Anthropology, Number 37
Third Edition Issued in Co-operation with
The Oriental Institute, The University of Chicago
Edited by Lillian A. Ross
Printed in the United States of America
by Chicago Natural History Museum Press
Copyright 1948, 1951, and 1957 by Chicago Natural History Museum
First edition 1948
Second edition 1951
Third edition 1957
Fourth edition 1959
Preface
[Illustration]
Like the writing of most professional archeologists, mine has been
confined to so-called learned papers. Good, bad, or indifferent, these
papers were in a jargon that only my colleagues and a few advanced
students could understand. Hence, when I was asked to do this little
book, I soon found it extremely difficult to say what I meant in simple
fashion. The style is new to me, but I hope the reader will not find it
forced or pedantic; at least I have done my very best to tell the story
simply and clearly.
Many friends have aided in the preparation of the book. The whimsical
charm of Miss Susan Richert's illustrations adds enormously to the
spirit I wanted. She gave freely of her own time on the drawings and
in planning the book with me. My colleagues at the University of
Chicago, especially Professor Wilton M. Krogman (now of the University
of Pennsylvania), and also Mrs. Linda Braidwood, Associate of the
Oriental Institute, and Professors Fay-Cooper Cole and Sol Tax, of
the Department of Anthropology, gave me counsel in matters bearing on
their special fields, and the Department of Anthropology bore some of
the expense of the illustrations. From Mrs. Irma Hunter and Mr. Arnold
Maremont, who are not archeologists at all and have only an intelligent
layman's notion of archeology, I had sound advice on how best to tell
the story. I am deeply indebted to all these friends.
While I was preparing the second edition, I had the great fortune
to be able to rework the third chapter with Professor Sherwood L.
Washburn, now of the Department of Anthropology of the University of
California, and the fourth, fifth, and sixth chapters with Professor
Hallum L. Movius, Jr., of the Peabody Museum, Harvard University. The
book has gained greatly in accuracy thereby. In matters of dating,
Professor Movius and the indications of Professor W. F. Libby's Carbon
14 chronology project have both encouraged me to choose the lowest
dates now current for the events of the Pleistocene Ice Age. There is
still no certain way of fixing a direct chronology for most of the
Pleistocene, but Professor Libby's method appears very promising for
its end range and for proto-historic dates. In any case, this book
names periods, and new dates may be written in against mine, if new
and better dating systems appear.
I wish to thank Dr. Clifford C. Gregg, Director of Chicago Natural
History Museum, for the opportunity to publish this book. My old
friend, Dr. Paul S. Martin, Chief Curator in the Department of
Anthropology, asked me to undertake the job and inspired me to complete
it. I am also indebted to Miss Lillian A. Ross, Associate Editor of
Scientific Publications, and to Mr. George I. Quimby, Curator of
Exhibits in Anthropology, for all the time they have given me in
getting the manuscript into proper shape.
ROBERT J. BRAIDWOOD
_June 15, 1950_
Preface to the Third Edition
In preparing the enlarged third edition, many of the above mentioned
friends have again helped me. I have picked the brains of Professor F.
Clark Howell of the Department of Anthropology of the University of
Chicago in reworking the earlier chapters, and he was very patient in
the matter, which I sincerely appreciate.
All of Mrs. Susan Richert Allen's original drawings appear, but a few
necessary corrections have been made in some of the charts and some new
drawings have been added by Mr. John Pfiffner, Staff Artist, Chicago
Natural History Museum.
ROBERT J. BRAIDWOOD
_March 1, 1959_
Contents
PAGE
How We Learn about Prehistoric Men 7
The Changing World in Which Prehistoric Men Lived 17
Prehistoric Men Themselves 22
Cultural Beginnings 38
More Evidence of Culture 56
Early Moderns 70
End and Prelude 92
The First Revolution 121
The Conquest of Civilization 144
End of Prehistory 162
Summary 176
List of Books 180
Index 184
HOW WE LEARN about Prehistoric Men
[Illustration]
Prehistory means the time before written history began. Actually, more
than 99 per cent of man's story is prehistory. Man is at least half a
million years old, but he did not begin to write history (or to write
anything) until about 5,000 years ago.
The men who lived in prehistoric times left us no history books, but
they did unintentionally leave a record of their presence and their way
of life. This record is studied and interpreted by different kinds of
scientists.
SCIENTISTS WHO FIND OUT ABOUT PREHISTORIC MEN
The scientists who study the bones and teeth and any other parts
they find of the bodies of prehistoric men, are called _physical
anthropologists_. Physical anthropologists are trained, much like
doctors, to know all about the human body. They study living people,
too; they know more about the biological facts of human races than
anybody else. If the police find a badly decayed body in a trunk,
they ask a physical anthropologist to tell them what the person
originally looked like. The physical anthropologists who specialize in
prehistoric men work with fossils, so they are sometimes called _human
paleontologists_.
ARCHEOLOGISTS
There is a kind of scientist who studies the things that prehistoric
men made and did. Such a scientist is called an _archeologist_. It is
the archeologist's business to look for the stone and metal tools, the
pottery, the graves, and the caves or huts of the men who lived before
history began.
But there is more to archeology than just looking for things. In
Professor V. Gordon Childe's words, archeology furnishes "a sort of
history of human activity, provided always that the actions have
produced concrete results and left recognizable material traces." You
will see that there are at least three points in what Childe says:
1. The archeologists have to find the traces of things left behind by
ancient man, and
2. Only a few objects may be found, for most of these were probably
too soft or too breakable to last through the years. However,
3. The archeologist must use whatever he can find to tell a story--to
make a sort of history--from the objects and living-places and
graves that have escaped destruction.
What I mean is this: Let us say you are walking through a dump yard,
and you find a rusty old spark plug. If you want to think about what
the spark plug means, you quickly remember that it is a part of an
automobile motor. This tells you something about the man who threw
the spark plug on the dump. He either had an automobile, or he knew
or lived near someone who did. He can't have lived so very long ago,
you'll remember, because spark plugs and automobiles are only about
sixty years old.
When you think about the old spark plug in this way you have
just been making the beginnings of what we call an archeological
_interpretation_; you have been making the spark plug tell a story.
It is the same way with the man-made things we archeologists find
and put in museums. Usually, only a few of these objects are pretty
to look at; but each of them has some sort of story to tell. Making
the interpretation of his finds is the most important part of the
archeologist's job. It is the way he gets at the sort of history of
human activity which is expected of archeology.
SOME OTHER SCIENTISTS
There are many other scientists who help the archeologist and the
physical anthropologist find out about prehistoric men. The geologists
help us tell the age of the rocks or caves or gravel beds in which
human bones or man-made objects are found. There are other scientists
with names which all begin with _paleo_ (the Greek word for old). The
_paleontologists_ study fossil animals. There are also, for example,
such scientists as _paleobotanists_ and _paleoclimatologists_, who
study ancient plants and climates. These scientists help us to know
the kinds of animals and plants that were living in prehistoric times
and so could be used for food by ancient man; what the weather was
like; and whether there were glaciers. Also, when I tell you that
prehistoric men did not appear until long after the great dinosaurs had
disappeared, I go on the say-so of the paleontologists. They know that
fossils of men and of dinosaurs are not found in the same geological
period. The dinosaur fossils come in early periods, the fossils of men
much later.
Since World War II even the atomic scientists have been helping the
archeologists. By testing the amount of radioactivity left in charcoal,
wood, or other vegetable matter obtained from archeological sites, they
have been able to date the sites. Shell has been used also, and even
the hair of Egyptian mummies. The dates of geological and climatic
events have also been discovered. Some of this work has been done from
drillings taken from the bottom of the sea.
This dating by radioactivity has considerably shortened the dates which
the archeologists used to give. If you find that some of the dates
I give here are more recent than the dates you see in other books
on prehistory, it is because I am using one of the new lower dating
systems.
[Illustration: RADIOCARBON CHART
The rate of disappearance of radioactivity as time passes.[1]]
[1] It is important that the limitations of the radioactive carbon
dating system be held in mind. As the statistics involved in
the system are used, there are two chances in three that the
date of the sample falls within the range given as plus or
minus an added number of years. For example, the date for the
Jarmo village (see chart), given as 6750 ± 200 B.C., really
means that there are only two chances in three that the real
date of the charcoal sampled fell between 6950 and 6550 B.C.
We have also begun to suspect that there are ways in which the
samples themselves may have become contaminated, either on
the early or on the late side. We now tend to be suspicious of
single radioactive carbon determinations, or of determinations
from one site alone. But as a fabric of consistent
determinations for several or more sites of one archeological
period, we gain confidence in the dates.
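The footnote's "two chances in three" can be restated numerically. This short sketch (in Python, and of course not part of the original book) treats the quoted plus-or-minus figure as one standard deviation, so that the two-in-three chance corresponds roughly to the 68% band; the Jarmo figure of 6750 ± 200 B.C. serves as the worked example:

```python
def radiocarbon_range(date_bc, plus_minus):
    # Treat the quoted plus-or-minus figure as one standard deviation:
    # there are about two chances in three (roughly 68%) that the true
    # date falls between these older and younger bounds.
    return date_bc + plus_minus, date_bc - plus_minus

# The Jarmo village date from the footnote: 6750 +/- 200 B.C.
print(radiocarbon_range(6750, 200))  # -> (6950, 6550), as the footnote says
```

This also makes plain why a single determination is treated cautiously: one chance in three, the range does not contain the true date at all.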
HOW THE SCIENTISTS FIND OUT
So far, this chapter has been mainly about the people who find out
about prehistoric men. We also need a word about _how_ they find out.
All our finds came by accident until about a hundred years ago. Men
digging wells, or digging in caves for fertilizer, often turned up
ancient swords or pots or stone arrowheads. People also found some odd
pieces of stone that didn't look like natural forms, but they also
didn't look like any known tool. As a result, the people who found them
gave them queer names; for example, "thunderbolts." The people thought
the strange stones came to earth as bolts of lightning. We know now
that these strange stones were prehistoric stone tools.
Many important finds still come to us by accident. In 1935, a British
dentist, A. T. Marston, found the first of two fragments of a very
important fossil human skull, in a gravel pit at Swanscombe, on the
River Thames, England. He had to wait nine months, until the face of
the gravel pit had been dug eight yards farther back, before the second
fragment appeared. They fitted! Then, twenty years later, still another
piece appeared. In 1928 workmen who were blasting out rock for the
breakwater in the port of Haifa began to notice flint tools. Thus the
story of cave men on Mount Carmel, in Palestine, began to be known.
Planned archeological digging is only about a century old. Even before
this, however, a few men realized the significance of objects they dug
from the ground; one of these early archeologists was our own Thomas
Jefferson. The first real mound-digger was a German grocer's clerk,
Heinrich Schliemann. Schliemann made a fortune as a merchant, first
in Europe and then in the California gold-rush of 1849. He became an
American citizen. Then he retired and had both money and time to test
an old idea of his. He believed that the heroes of ancient Troy and
Mycenae were once real Trojans and Greeks. He proved it by going to
Turkey and Greece and digging up the remains of both cities.
Schliemann had the great good fortune to find rich and spectacular
treasures, and he also had the common sense to keep notes and make
descriptions of what he found. He proved beyond doubt that many ancient
city mounds can be _stratified_. This means that there may be the
remains of many towns in a mound, one above another, like layers in a
cake.
You might like to have an idea of how mounds come to be in layers.
The original settlers may have chosen the spot because it had a good
spring and there were good fertile lands nearby, or perhaps because
it was close to some road or river or harbor. These settlers probably
built their town of stone and mud-brick. Finally, something would have
happened to the town--a flood, or a burning, or a raid by enemies--and
the walls of the houses would have fallen in or would have melted down
as mud in the rain. Nothing would have remained but the mud and debris
of a low mound of _one_ layer.
The second settlers would have wanted the spot for the same reasons
the first settlers did--good water, land, and roads. Also, the second
settlers would have found a nice low mound to build their houses on,
a protection from floods. But again, something would finally have
happened to the second town, and the walls of _its_ houses would have
come tumbling down. This makes the _second_ layer. And so on....
In Syria I once had the good fortune to dig on a large mound that had
no less than fifteen layers. Also, most of the layers were thick, and
there were signs of rebuilding and repairs within each layer. The mound
was more than a hundred feet high. In each layer, the building material
used had been a soft, unbaked mud-brick, and most of the debris
consisted of fallen or rain-melted mud from these mud-bricks.
This idea of _stratification_, like the cake layers, was already a
familiar one to the geologists by Schliemann's time. They could show
that their lowest layer of rock was oldest or earliest, and that the
overlying layers became more recent as one moved upward. Schliemann's
digging proved the same thing at Troy. His first (lowest and earliest)
city had at least nine layers above it; he thought that the second
layer contained the remains of Homer's Troy. We now know that Homeric
Troy was layer VIIa from the bottom; also, we count eleven layers or
sub-layers in total.
Schliemann's work marks the beginnings of modern archeology. Scholars
soon set out to dig on ancient sites, from Egypt to Central America.
ARCHEOLOGICAL INFORMATION
As time went on, the study of archeological materials--found either
by accident or by digging on purpose--began to show certain things.
Archeologists began to get ideas as to the kinds of objects that
belonged together. If you compared a mail-order catalogue of 1890 with
one of today, you would see a lot of differences. If you really studied
the two catalogues hard, you would also begin to see that certain
objects go together. Horseshoes and metal buggy tires and pieces of
harness would begin to fit into a picture with certain kinds of coal
stoves and furniture and china dishes and kerosene lamps. Our friend
the spark plug, and radios and electric refrigerators and light bulbs
would fit into a picture with different kinds of furniture and dishes
and tools. You won't be old enough to remember the kind of hats that
women wore in 1890, but you've probably seen pictures of them, and you
know very well they couldn't be worn with the fashions of today.
This is one of the ways that archeologists study their materials.
The various tools and weapons and jewelry, the pottery, the kinds
of houses, and even the ways of burying the dead tend to fit into
pictures. Some archeologists call all of the things that go together to
make such a picture an _assemblage_. The assemblage of the first layer
of Schliemann's Troy was as different from that of the seventh layer as
our 1890 mail-order catalogue is from the one of today.
The archeologists who came after Schliemann began to notice other
things and to compare them with occurrences in modern times. The
idea that people will buy better mousetraps goes back into very
ancient times. Today, if we make good automobiles or radios, we can
sell some of them in Turkey or even in Timbuktu. This means that a
few present-day types of American automobiles and radios form part
of present-day assemblages in both Turkey and Timbuktu. The total
present-day assemblage of Turkey is quite different from that of
Timbuktu or that of America, but they have at least some automobiles
and some radios in common.
Now these automobiles and radios will eventually wear out. Let us
suppose we could go to some remote part of Turkey or to Timbuktu in a
dream. We don't know what the date is, in our dream, but we see all
sorts of strange things and ways of living in both places. Nobody
tells us what the date is. But suddenly we see a 1936 Ford; so we
know that in our dream it has to be at least the year 1936, and only
as many years after that as we could reasonably expect a Ford to keep
in running order. The Ford would probably break down in twenty years'
time, so the Turkish or Timbuktu assemblage we're seeing in our dream
has to date at about A.D. 1936-56.
Archeologists not only date their ancient materials in this way; they
also see over what distances and between which peoples trading was
done. It turns out that there was a good deal of trading in ancient
times, probably all on a barter and exchange basis.
EVERYTHING BEGINS TO FIT TOGETHER
Now we need to pull these ideas all together and see the complicated
structure the archeologists can build with their materials.
Even the earliest archeologists soon found that there was a very long
range of prehistoric time which would yield only very simple things.
For this very long early part of prehistory, there was little to be
found but the flint tools which wandering, hunting and gathering
people made, and the bones of the wild animals they ate. Toward the
end of prehistoric time there was a general settling down with the
coming of agriculture, and all sorts of new things began to be made.
Archeologists soon got a general notion of what ought to appear with
what. Thus, it would upset a French prehistorian digging at the bottom
of a very early cave if he found a fine bronze sword, just as much as
it would upset him if he found a beer bottle. The people of his very
early cave layer simply could not have made bronze swords, which came
later, just as do beer bottles. Some accidental disturbance of the
layers of his cave must have happened.
With any luck, archeologists do their digging in a layered, stratified
site. They find the remains of everything that would last through
time, in several different layers. They know that the assemblage in
the bottom layer was laid down earlier than the assemblage in the next
layer above, and so on up to the topmost layer, which is the latest.
They look at the results of other digs and find that some other
archeologist 900 miles away has found ax-heads in his lowest layer,
exactly like the ax-heads of their fifth layer. This means that their
fifth layer must have been lived in at about the same time as was the
first layer in the site 900 miles away. It also may mean that the
people who lived in the two layers knew and traded with each other. Or
it could mean that they didn't necessarily know each other, but simply
that both traded with a third group at about the same time.
You can see that the more we dig and find, the more clearly the main
facts begin to stand out. We begin to be more sure of which people
lived at the same time, which earlier and which later. We begin to
know who traded with whom, and which peoples seemed to live off by
themselves. We begin to find enough skeletons in burials so that the
physical anthropologists can tell us what the people looked like. We
get animal bones, and a paleontologist may tell us they are all bones
of wild animals; or he may tell us that some or most of the bones are
those of domesticated animals, for instance, sheep or cattle, and
therefore the people must have kept herds.
More important than anything else--as our structure grows more
complicated and our materials increase--is the fact that a sort
of history of human activity does begin to appear. The habits or
traditions that men formed in the making of their tools and in the
ways they did things, begin to stand out for us. How characteristic
were these habits and traditions? What areas did they spread over?
How long did they last? We watch the different tools and the traces
of the way things were done--how the burials were arranged, what
the living-places were like, and so on. We wonder about the people
themselves, for the traces of habits and traditions are useful to us
only as clues to the men who once had them. So we ask the physical
anthropologists about the skeletons that we found in the burials. The
physical anthropologists tell us about the anatomy and the similarities
and differences which the skeletons show when compared with other
skeletons. The physical anthropologists are even working on a
method--chemical tests of the bones--that will enable them to discover
what the blood-type may have been. One thing is sure. We have never
found a group of skeletons so absolutely similar among themselves--so
cast from a single mould, so to speak--that we could claim to have a
"pure race." I am sure we never shall.
We become particularly interested in any signs of change--when new
materials and tool types and ways of doing things replace old ones. We
watch for signs of social change and progress in one way or another.
We must do all this without one word of written history to aid us.
Everything we are concerned with goes back to the time _before_ men
learned to write. That is the prehistorian's job--to find out what
happened before history began.
THE CHANGING WORLD in which Prehistoric Men Lived
[Illustration]
Mankind, well say, is at least a half million years old. It is very
hard to understand how long a time half a million years really is.
If we were to compare this whole length of time to one day, we'd get
something like this: The present time is midnight, and Jesus was
born just five minutes and thirty-six seconds ago. Earliest history
began less than fifteen minutes ago. Everything before 11:45 was in
prehistoric time.
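The clock analogy can be checked with a few lines of arithmetic. The sketch below uses only the book's own round numbers (a 500,000-year span, Jesus born roughly 1,950 years before the book's mid-1950s writing date, written history beginning about 5,000 years ago); nothing here is new data.

```python
# A sketch verifying the "half a million years = one day" analogy.
# All figures are the book's round numbers; "1,950 years ago" assumes
# the book's mid-1950s writing date.
TOTAL_YEARS = 500_000
SECONDS_PER_DAY = 24 * 60 * 60

def seconds_before_midnight(years_ago):
    """Map 'years before the present' onto seconds before midnight."""
    return years_ago / TOTAL_YEARS * SECONDS_PER_DAY

# Birth of Jesus, ~1,950 years ago:
minutes, seconds = divmod(int(seconds_before_midnight(1_950)), 60)
print(minutes, seconds)  # 5 36 -- "five minutes and thirty-six seconds ago"

# Start of written history, ~5,000 years ago:
print(seconds_before_midnight(5_000) / 60)  # 14.4 minutes -- just before 11:45
```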
Or maybe we can grasp the length of time better in terms of
generations. As you know, primitive peoples tend to marry and have
children rather early in life. So suppose we say that twenty years
will make an average generation. At this rate there would be 25,000
generations in a half-million years. But our United States is much less
than ten generations old, twenty-five generations take us back before
the time of Columbus, Julius Caesar was alive just 100 generations ago,
David was king of Israel less than 150 generations ago, 250 generations
take us back to the beginning of written history. And there were 24,750
generations of men before written history began!
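The generation count works out the same way; this sketch simply redoes the paragraph's arithmetic with its own round figures (twenty years per generation over a half-million-year span).

```python
# A sketch of the paragraph's generation arithmetic, using its round figures.
YEARS_PER_GENERATION = 20
TOTAL_YEARS = 500_000

total_generations = TOTAL_YEARS // YEARS_PER_GENERATION
print(total_generations)  # 25000 generations in half a million years

# 250 generations reach back to the beginning of written history...
print(total_generations - 250)  # 24750 generations before writing began

# ...and a few of the chapter's other checkpoints, in years:
print(100 * YEARS_PER_GENERATION)  # 2000 -- Julius Caesar, 100 generations ago
print(150 * YEARS_PER_GENERATION)  # 3000 -- King David, under 150 generations
```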
I should probably tell you that there is a new method of prehistoric
dating which would cut the earliest dates in my reckoning almost
in half. Dr. Cesare Emiliani, combining radioactive (C14) and
chemical (oxygen isotope) methods in the study of deep-sea borings,
has developed a system which would lower the total range of human
prehistory to about 300,000 years. The system is still too new to have
had general examination and testing. Hence, I have not used it in this
book; it would mainly affect the dates earlier than 25,000 years ago.
CHANGES IN ENVIRONMENT
The earth probably hasn't changed much in the last 5,000 years (250
generations). Men have built things on its surface and dug into it and
drawn boundaries on maps of it, but the places where rivers, lakes,
seas, and mountains now stand have changed very little.
In earlier times the earth looked very different. Geologists call the
last great geological period the _Pleistocene_. It began somewhere
between a half million and a million years ago, and was a time of great
changes. Sometimes we call it the "Ice Age," for in the Pleistocene
there were at least three or four times when large areas of earth
were covered with glaciers. The reason for my uncertainty is that
while there seem to have been four major mountain or alpine phases of
glaciation, there may only have been three general continental phases
in the Old World.[2]
[2] This is a complicated affair and I do not want to bother you
with its details. Both the alpine and the continental ice sheets
seem to have had minor fluctuations during their _main_ phases,
and the advances of the later phases destroyed many of the
traces of the earlier phases. The general textbooks have tended
to follow the names and numbers established for the Alps early
in this century by two German geologists. I will not bother you
with the names, but there were _four_ major phases. It is the
second of these alpine phases which seems to fit the traces of
the earliest of the great continental glaciations. In this book,
I will use the four-part system, since it is the most familiar,
but will add the word _alpine_ so you may remember to make the
transition to the continental system if you wish to do so.
Glaciers are great sheets of ice, sometimes over a thousand feet
thick, which are now known only in Greenland and Antarctica and in
high mountains. During several of the glacial periods in the Ice Age,
the glaciers covered most of Canada and the northern United States and
reached down to southern England and France in Europe. Smaller ice
sheets sat like caps on the Rockies, the Alps, and the Himalayas. The
continental glaciation only happened north of the equator, however, so
remember that "Ice Age" is only half true.
As you know, the amount of water on and about the earth does not vary.
These large glaciers contained millions of tons of water frozen into
ice. Because so much water was frozen and contained in the glaciers,
the water level of lakes and oceans was lowered. Flooded areas were
drained and appeared as dry land. There were times in the Ice Age when
there was no English Channel, so that England was not an island, and a
land bridge at the Dardanelles probably divided the Mediterranean from
the Black Sea.
A very important thing for people living during the time of a
glaciation was the region adjacent to the glacier. They could not, of
course, live on the ice itself. The questions would be how close could
they live to it, and how would they have had to change their way of
life to do so.
GLACIERS CHANGE THE WEATHER
Great sheets of ice change the weather. When the front of a glacier
stood at Milwaukee, the weather must have been bitterly cold in
Chicago. The climate of the whole world would have been different, and
you can see how animals and men would have been forced to move from one
place to another in search of food and warmth.
On the other hand, it looks as if only a minor proportion of the whole
Ice Age was really taken up by times of glaciation. In between came
the _interglacial_ periods. During these times the climate around
Chicago was as warm as it is now, and sometimes even warmer. It may
interest you to know that the last great glacier melted away less than
10,000 years ago. Professor Ernst Antevs thinks we may be living in an
interglacial period and that the Ice Age may not be over yet. So if you
want to make a killing in real estate for your several hundred times
great-grandchildren, you might buy some land in the Arizona desert or
the Sahara.
We do not yet know just why the glaciers appeared and disappeared, as
they did. It surely had something to do with an increase in rainfall
and a fall in temperature. It probably also had to do with a general
tendency for the land to rise at the beginning of the Pleistocene. We
know there was some mountain-building at that time. Hence, rain-bearing
winds nourished the rising and cooler uplands with snow. An increase
in all three of these factors--if they came together--would only have
needed to be slight. But exactly why this happened we do not know.
The reason I tell you about the glaciers is simply to remind you of the
changing world in which prehistoric men lived. Their surroundings--the
animals and plants they used for food, and the weather they had to
protect themselves from--were always changing. On the other hand, this
change happened over so long a period of time and was so slow that
individual people could not have noticed it. Glaciers, about which they
probably knew nothing, moved in hundreds of miles to the north of them.
The people must simply have wandered ever more southward in search
of the plants and animals on which they lived. Or some men may have
stayed where they were and learned to hunt different animals and eat
different foods. Prehistoric men had to keep adapting themselves to new
environments and those who were most adaptive were most successful.
OTHER CHANGES
Changes took place in the men themselves as well as in the ways they
lived. As time went on, they made better tools and weapons. Then, too,
we begin to find signs of how they started thinking of other things
than food and the tools to get it with. We find that they painted on
the walls of caves, and decorated their tools; we find that they buried
their dead.
At about the time when the last great glacier was finally melting away,
men in the Near East made the first basic change in human economy.
They began to plant grain, and they learned to raise and herd certain
animals. This meant that they could store food in granaries and on the
hoof against the bad times of the year. This first really basic change
in man's way of living has been called the "food-producing revolution."
By the time it happened, a modern kind of climate was beginning. Men
had already grown to look as they do now. Know-how in ways of living
had developed and progressed, slowly but surely, up to a point. It was
impossible for men to go beyond that point if they only hunted and
fished and gathered wild foods. Once the basic change was made--once
the food-producing revolution became effective--technology leaped ahead
and civilization and written history soon began.
Prehistoric Men THEMSELVES
[Illustration]
DO WE KNOW WHERE MAN ORIGINATED?
For a long time some scientists thought the cradle of mankind was in
central Asia. Other scientists insisted it was in Africa, and still
others said it might have been in Europe. Actually, we don't know
where it was. We don't even know that there was only _one_ cradle.
If we had to choose a cradle at this moment, we would probably say
Africa. But the southern portions of Asia and Europe may also have been
included in the general area. The scene of the early development of
mankind was certainly the Old World. It is pretty certain men didn't
reach North or South America until almost the end of the Ice Age--had
they done so earlier we would certainly have found some trace of them
by now.
The earliest tools we have yet found come from central and south
Africa. By the dating system I'm using, these tools must be over
500,000 years old. There are now reports that a few such early tools
have been found--at the Sterkfontein cave in South Africa--along with
the bones of small fossil men called australopithecines.
Not all scientists would agree that the australopithecines were men,
or would agree that the tools were made by the australopithecines
themselves. For these sticklers, the earliest bones of men come from
the island of Java. The date would be about 450,000 years ago. So far,
we have not yet found the tools which we suppose these earliest men in
the Far East must have made.
Let me say it another way. How old are the earliest traces of men we
now have? Over half a million years. This was a time when the first
alpine glaciation was happening in the north. What has been found so
far? The tools which the men of those times made, in different parts
of Africa. It is now fairly generally agreed that the men who made
the tools were the australopithecines. There is also a more man-like
jawbone at Kanam in Kenya, but its find-spot has been questioned. The
next earliest bones we have were found in Java, and they may be almost
a hundred thousand years younger than the earliest African finds. We
haven't yet found the tools of these early Javanese. Our knowledge of
tool-using in Africa spreads quickly as time goes on: soon after the
appearance of tools in the south we shall have them from as far north
as Algeria.
Very soon after the earliest Javanese come the bones of slightly more
developed people in Java, and the jawbone of a man who once lived in
what is now Germany. The same general glacial beds which yielded the
later Javanese bones and the German jawbone also include tools. These
finds come from the time of the second alpine glaciation.
So this is the situation. By the time of the end of the second alpine
or first continental glaciation (say 400,000 years ago) we have traces
of men from the extremes of the more southerly portions of the Old
World--South Africa, eastern Asia, and western Europe. There are also
some traces of men in the middle ground. In fact, Professor Franz
Weidenreich believed that creatures who were the immediate ancestors
of men had already spread over Europe, Africa, and Asia by the time
the Ice Age began. We certainly have no reason to disbelieve this, but
fortunate accidents of discovery have not yet given us the evidence to
prove it.
MEN AND APES
Many people used to get extremely upset at the ill-formed notion
that "man descended from the apes." Such words were much more likely
to start fights or "monkey trials" than the correct notion that all
living animals, including man, ascended or evolved from a single-celled
organism which lived in the primeval seas hundreds of millions of years
ago. Men are mammals, of the order called Primates, and man's living
relatives are the great apes. Men didn't descend from the apes or
apes from men, and mankind must have had much closer relatives who have
since become extinct.
Men stand erect. They also walk and run on their two feet. Apes are
happiest in trees, swinging with their arms from branch to branch.
Few branches of trees will hold the mighty gorilla, although he still
manages to sleep in trees. Apes can't stand really erect in our sense,
and when they have to run on the ground, they use the knuckles of their
hands as well as their feet.
A key group of fossil bones here are the south African
australopithecines. These are called the _Australopithecinae_ or
"man-apes" or sometimes even "ape-men." We do not _know_ that they were
directly ancestral to men but they can hardly have been so to apes.
Presently I'll describe them a bit more. The reason I mention them
here is that while they had brains no larger than those of apes, their
hipbones were enough like ours so that they must have stood erect.
There is no good reason to think they couldn't have walked as we do.
BRAINS, HANDS, AND TOOLS
Whether the australopithecines were our ancestors or not, the proper
ancestors of men must have been able to stand erect and to walk on
their two feet. Three further important things probably were involved,
next, before they could become men proper. These are:
1. The increasing size and development of the brain.
2. The increasing usefulness (specialization) of the thumb and hand.
3. The use of tools.
Nobody knows which of these three is most important, or which came
first. Most probably the growth of all three things was very much
blended together. If you think about each of the things, you will see
what I mean. Unless your hand is more flexible than a paw, and your
thumb will work against (or oppose) your fingers, you can't hold a tool
very well. But you wouldn't get the idea of using a tool unless you had
enough brain to help you see cause and effect. And it is rather hard to
see how your hand and brain would develop unless they had something to
practice on--like using tools. In Professor Krogman's words, "the hand
must become the obedient servant of the eye and the brain." It is the
_co-ordination_ of these things that counts.
Many other things must have been happening to the bodies of the
creatures who were the ancestors of men. Our ancestors had to develop
organs of speech. More than that, they had to get the idea of letting
_certain sounds_ made with these speech organs have _certain meanings_.
All this must have gone very slowly. Probably everything was developing
little by little, all together. Men became men very slowly.
WHEN SHALL WE CALL MEN MEN?
What do I mean when I say "men"? People who looked pretty much as we
do, and who used different tools to do different things, are "men" to me.
We'll probably never know whether the earliest ones talked or not. They
probably had vocal cords, so they could make sounds, but did they know
how to make sounds work as symbols to carry meanings? But if the fossil
bones look like our skeletons, and if we find tools which we'll agree
couldn't have been made by nature or by animals, then I'd say we had
traces of _men_.
The australopithecine finds of the Transvaal and Bechuanaland, in
south Africa, are bound to come into the discussion here. I've already
told you that the australopithecines could have stood upright and
walked on their two hind legs. They come from the very base of the
Pleistocene or Ice Age, and a few coarse stone tools have been found
with the australopithecine fossils. But there are three varieties
of the australopithecines and they last on until a time equal to
that of the second alpine glaciation. They are the best suggestion
we have yet as to what the ancestors of men _may_ have looked like.
They were certainly closer to men than to apes. Although their brain
size was no larger than the brains of modern apes their body size and
stature were quite small; hence, relative to their small size, their
brains were large. We have not been able to prove without doubt that
the australopithecines were _tool-making_ creatures, even though the
recent news has it that tools have been found with australopithecine
bones. The doubt as to whether the australopithecines used the tools
themselves goes like this--just suppose some man-like creature (whose
bones we have not yet found) made the tools and used them to kill
and butcher australopithecines. Hence a few experts tend to let
australopithecines still hang in limbo as "man-apes."
THE EARLIEST MEN WE KNOW
I'll postpone talking about the tools of early men until the next
chapter. The men whose bones were the earliest of the Java lot have
been given the name _Meganthropus_. The bones are very fragmentary. We
would not understand them very well unless we had the somewhat later
Javanese lot--the more commonly known _Pithecanthropus_ or Java
man--against which to refer them for study. One of the less well-known
and earliest fragments, a piece of lower jaw and some teeth, rather
strongly resembles the lower jaws and teeth of the australopithecine
type. Was _Meganthropus_ a sort of half-way point between the
australopithecines and _Pithecanthropus_? It is still too early to say.
We shall need more finds before we can be definite one way or the other.
Java man, _Pithecanthropus_, comes from geological beds equal in age
to the latter part of the second alpine glaciation; the _Meganthropus_
finds refer to beds of the beginning of this glaciation. The first
finds of Java man were made in 1891-92 by Dr. Eugene Dubois, a Dutch
doctor in the colonial service. Finds have continued to be made. There
are now bones enough to account for four skulls. There are also four
jaws and some odd teeth and thigh bones. Java man, generally speaking,
was about five feet six inches tall, and didn't hold his head very
erect. His skull was very thick and heavy and had room for little more
than two-thirds as large a brain as we have. He had big teeth and a big
jaw and enormous eyebrow ridges.
No tools were found in the geological deposits where bones of Java man
appeared. There are some tools in the same general area, but they come
a bit later in time. One reason we accept the Java man as man--aside
from his general anatomical appearance--is that these tools probably
belonged to his near descendants.
Remember that there are several varieties of men in the whole early
Java lot, at least two of which are earlier than the _Pithecanthropus_,
Java man. Some of the earlier ones seem to have gone in for
"bigness," in tooth-size at least. _Meganthropus_ is one of these
earlier varieties. As we said, he _may_ turn out to be a link to
the australopithecines, who _may_ or _may not_ be ancestral to men.
_Meganthropus_ is best understandable in terms of _Pithecanthropus_,
who appeared later in the same general area. _Pithecanthropus_ is
pretty well understandable from the bones he left us, and also because
of his strong resemblance to the fully tool-using cave-dwelling Peking
man, _Sinanthropus_, about whom we shall talk next. But you can see
that the physical anthropologists and prehistoric archeologists still
have a lot of work to do on the problem of earliest men.
PEKING MEN AND SOME EARLY WESTERNERS
The earliest known Chinese are called _Sinanthropus_, or Peking man,
because the finds were made near that city. In World War II, the United
States Marine guard at our Embassy in Peking tried to help get the
bones out of the city before the Japanese attack. Nobody knows where
these bones are now. The Red Chinese accuse us of having stolen them.
They were last seen on a dock-side at a Chinese port. But should you
catch a Marine with a sack of old bones, perhaps we could achieve peace
in Asia by returning them! Fortunately, there is a complete set of
casts of the bones.
Peking man lived in a cave in a limestone hill, made tools, cracked
animal bones to get the marrow out, and used fire. Incidentally, the
bones of Peking man were found because Chinese dig for what they call
"dragon bones" and "dragon teeth." Uneducated Chinese buy these things
in their drug stores and grind them into powder for medicine. The
"dragon teeth" and bones are really fossils of ancient animals, and
sometimes of men. The people who supply the drug stores have learned
where to dig for strange bones and teeth. Paleontologists who get to
China go to the drug stores to buy fossils. In a roundabout way, this
is how the fallen-in cave of Peking man at Choukoutien was discovered.
Peking man was not quite as tall as Java man but he probably stood
straighter. His skull looked very much like that of the Java skull
except that it had room for a slightly larger brain. His face was less
brutish than was Java man's face, but this isn't saying much.
Peking man dates from early in the interglacial period following the
second alpine glaciation. He probably lived close to 350,000 years
ago. There are several finds to account for in Europe by about this
time, and one from northwest Africa. The very large jawbone found
near Heidelberg in Germany is doubtless even earlier than Peking man.
The beds where it was found are of second alpine glacial times, and
recently some tools have been said to have come from the same beds.
There is not much I need tell you about the Heidelberg jaw save that it
seems certainly to have belonged to an early man, and that it is very
big.
Another find in Germany was made at Steinheim. It consists of the
fragmentary skull of a man. It is very important because of its
relative completeness, but it has not yet been fully studied. The bone
is thick, but the back of the head is neither very low nor primitive,
and the face is also not primitive. The forehead does, however, have
big ridges over the eyes. The more fragmentary skull from Swanscombe in
England (p. 11) has been much more carefully studied. Only the top and
back of that skull have been found. Since the skull rounds up nicely,
it has been assumed that the face and forehead must have been quite
modern. Careful comparison with Steinheim shows that this was not
necessarily so. This is important because it bears on the question of
how early truly modern man appeared.
Recently two fragmentary jaws were found at Ternafine in Algeria,
northwest Africa. They look like the jaws of Peking man. Tools were
found with them. Since no jaws have yet been found at Steinheim or
Swanscombe, but the time is the same, one wonders if these people had
jaws like those of Ternafine.
WHAT HAPPENED TO JAVA AND PEKING MEN
Professor Weidenreich thought that there were at least a dozen ways in
which the Peking man resembled the modern Mongoloids. This would seem
to indicate that Peking man was really just a very early Chinese.
Several later fossil men have been found in the Java-Australian area.
The best known of these is the so-called Solo man. There are some finds
from Australia itself which we now know to be quite late. But it looks
as if we may assume a line of evolution from Java man down to the
modern Australian natives. During parts of the Ice Age there was a land
bridge all the way from Java to Australia.
TWO ENGLISHMEN WHO WEREN'T OLD
The older textbooks contain descriptions of two English finds which
were thought to be very old. These were called Piltdown (_Eoanthropus
dawsoni_) and Galley Hill. The skulls were very modern in appearance.
In 1948-49, British scientists began making chemical tests which proved
that neither of these finds is very old. It is now known that both
Piltdown man and the tools which were said to have been found with
him were part of an elaborate fake!
TYPICAL CAVE MEN
The next men we have to talk about are all members of a related group.
These are the Neanderthal group. Neanderthal man himself was found in
the Neander Valley, near Düsseldorf, Germany, in 1856. He was the first
human fossil to be recognized as such.
[Illustration: PRINCIPAL KNOWN TYPES OF FOSSIL MEN
CRO-MAGNON
NEANDERTHAL
MODERN SKULL
COMBE-CAPELLE
SINANTHROPUS
PITHECANTHROPUS]
Some of us think that the neanderthaloids proper are only those people
of western Europe who didn't get out before the beginning of the last
great glaciation, and who found themselves hemmed in by the glaciers
in the Alps and northern Europe. Being hemmed in, they intermarried
a bit too much and developed into a special type. Professor F. Clark
Howell sees it this way. In Europe, the earliest trace of men we
now know is the Heidelberg jaw. Evolution continued in Europe, from
Heidelberg through the Swanscombe and Steinheim types to a group of
pre-neanderthaloids. There are traces of these pre-neanderthaloids
pretty much throughout Europe during the third interglacial period--say
100,000 years ago. The pre-neanderthaloids are represented by such
finds as the ones at Ehringsdorf in Germany and Saccopastore in Italy.
I won't describe them for you, since they are simply less extreme than
the neanderthaloids proper--about half way between Steinheim and the
classic Neanderthal people.
Professor Howell believes that the pre-neanderthaloids who happened to
get caught in the pocket of the southwest corner of Europe at the onset
of the last great glaciation became the classic Neanderthalers. Out in
the Near East, Howell thinks, it is possible to see traces of people
evolving from the pre-neanderthaloid type toward that of fully modern
man. Certainly, we don't see such extreme cases of neanderthaloidism
outside of western Europe.
There are at least a dozen good examples in the main or classic
Neanderthal group in Europe. They date to just before and in the
earlier part of the last great glaciation (85,000 to 40,000 years ago).
Many of the finds have been made in caves. The "cave men" the movies
and the cartoonists show you are probably meant to be Neanderthalers.
I'm not at all sure they dragged their women by the hair; the women
were probably pretty tough, too!
Neanderthal men had large bony heads, but plenty of room for brains.
Some had brain cases even larger than the average for modern man. Their
faces were heavy, and they had eyebrow ridges of bone, but the ridges
were not as big as those of Java man. Their foreheads were very low,
and they didn't have much chin. They were about five feet three inches
tall, but were heavy and barrel-chested. But the Neanderthalers didn't
slouch as much as they've been blamed for, either.
One important thing about the Neanderthal group is that there is a fair
number of them to study. Just as important is the fact that we know
something about how they lived, and about some of the tools they made.
OTHER MEN CONTEMPORARY WITH THE NEANDERTHALOIDS
We have seen that the neanderthaloids seem to be a specialization
in a corner of Europe. What was going on elsewhere? We think that
the pre-neanderthaloid type was a generally widespread form of men.
From this type evolved other more or less extreme although generally
related men. The Solo finds in Java form one such case. Another was the
Rhodesian man of Africa, and the more recent Hopefield finds show more
of the general Rhodesian type. It is more confusing than it needs to be
if these cases outside western Europe are called neanderthaloids. They
lived during the same approximate time range but they were all somewhat
different-looking people.
EARLY MODERN MEN
How early is modern man (_Homo sapiens_), the "wise man"? Some people
have thought that he was very early, a few still think so. Piltdown
and Galley Hill, which were quite modern in anatomical appearance and
_supposedly_ very early in date, were the best evidence for very
early modern men. Now that Piltdown has been liquidated and Galley Hill
is known to be very late, what is left of the idea?
The backs of the skulls of the Swanscombe and Steinheim finds look
rather modern. Unless you pay attention to the face and forehead of the
Steinheim find--which not many people have--and perhaps also consider
the Ternafine jaws, you might come to the conclusion that the crown of
the Swanscombe head was that of a modern-like man.
Two more skulls, again without faces, are available from a French
cave site, Fontéchevade. They come from the time of the last great
interglacial, as did the pre-neanderthaloids. The crowns of the
Fontéchevade skulls also look quite modern. There is a bit of the
forehead preserved on one of these skulls and the brow-ridge is not
heavy. Nevertheless, there is a suggestion that the bones belonged to
an immature individual. In this case, his (or even more so, if _her_)
brow-ridges would have been weak anyway. The case for the Fontéchevade
fossils, as "modern type men," is little stronger than that for
Swanscombe, although Professor Vallois believes it a good case.
It seems to add up to the fact that there were people living in
Europe--before the classic neanderthaloids--who looked more modern,
in some features, than the classic western neanderthaloids did. Our
best suggestion of what men looked like--just before they became fully
modern--comes from a cave on Mount Carmel in Palestine.
THE FIRST MODERNS
Professor T. D. McCown and the late Sir Arthur Keith, who studied the
Mount Carmel bones, figured out that one of the two groups involved
was as much as 70 per cent modern. There were, in fact, two groups or
varieties of men in the Mount Carmel caves and in at least two other
Palestinian caves of about the same time. The time would be about that
of the onset of colder weather, when the last glaciation was beginning
in the north--say 75,000 years ago.
The 70 per cent modern group came from only one cave, Mugharet es-Skhul
("cave of the kids"). The other group, from several caves, had bones of
men of the type we've been calling pre-neanderthaloid which we noted
were widespread in Europe and beyond. The tools which came with each
of these finds were generally similar, and McCown and Keith, and other
scholars since their study, have tended to assume that both the Skhul
group and the pre-neanderthaloid group came from exactly the same time.
The conclusion was quite natural: here was a population of men in the
act of evolving in two different directions. But the time may not be
exactly the same. It is very difficult to be precise, within say 10,000
years, for a time some 75,000 years ago. If the Skhul men are in fact
later than the pre-neanderthaloid group of Palestine, as some of us
think, then they show how relatively modern some men were--men who
lived at the same time as the classic Neanderthalers of the European
pocket.
Soon after the first extremely cold phase of the last glaciation, we
begin to get a number of bones of completely modern men in Europe.
We also get great numbers of the tools they made, and their living
places in caves. Completely modern skeletons begin turning up in caves
dating back to toward 40,000 years ago. The time is about that of the
beginning of the second phase of the last glaciation. These skeletons
belonged to people no different from many people we see today. Like
people today, not everybody looked alike. (The positions of the more
important fossil men of later Europe are shown in the chart on page
72.)
DIFFERENCES IN THE EARLY MODERNS
The main early European moderns have been divided into two groups, the
Cro-Magnon group and the Combe Capelle-Brünn group. Cro-Magnon people
were tall and big-boned, with large, long, and rugged heads. They
must have been built like many present-day Scandinavians. The Combe
Capelle-Brünn people were shorter; they had narrow heads and faces, and
big eyebrow-ridges. Of course we don't find the skin or hair of these
people. But there is little doubt they were Caucasoids (Whites).
Another important find came in the Italian Riviera, near Monte Carlo.
Here, in a cave near Grimaldi, there was a grave containing a woman
and a young boy, buried together. The two skeletons were first called
"Negroid" because some features of their bones were thought to resemble
certain features of modern African Negro bones. But more recently,
Professor E. A. Hooton and other experts questioned the use of the word
"Negroid" in describing the Grimaldi skeletons. It is true that nothing
is known of the skin color, hair form, or any other fleshy feature of
the Grimaldi people, so that the word "Negroid" in its usual meaning is
not proper here. It is also not clear whether the features of the bones
claimed to be Negroid are really so at all.
From a place called Wadjak, in Java, we have proto-Australoid skulls
which closely resemble those of modern Australian natives. Some of
the skulls found in South Africa, especially the Boskop skull, look
like those of modern Bushmen, but are much bigger. The ancestors of
the Bushmen seem to have once been very widespread south of the Sahara
Desert. True African Negroes were forest people who apparently expanded
out of the west central African area only in the last several thousand
years. Although dark in skin color, neither the Australians nor the
Bushmen are Negroes; neither the Wadjak nor the Boskop skulls are
Negroid.
As we've already mentioned, Professor Weidenreich believed that Peking
man was already on the way to becoming a Mongoloid. Anyway, the
Mongoloids would seem to have been present by the time of the Upper
Cave at Choukoutien, the _Sinanthropus_ find-spot.
WHAT THE DIFFERENCES MEAN
What does all this difference mean? It means that, at one moment in
time, within each different area, men tended to look somewhat alike.
From area to area, men tended to look somewhat different, just as
they do today. This is all quite natural. People _tended_ to mate
near home; in the anthropological jargon, they made up geographically
localized breeding populations. The simple continental division of
stocks--black = Africa, yellow = Asia, white = Europe--is too simple
a picture to fit the facts. People became accustomed to life in some
particular area within a continent (we might call it a natural area).
As they went on living there, they evolved towards some particular
physical variety. It would, of course, have been difficult to draw
a clear boundary between two adjacent areas. There must always have
been some mating across the boundaries in every case. One thing human
beings don't do, and never have done, is to mate for "purity." It is
self-righteous nonsense when we try to kid ourselves into thinking that
they do.
I am not going to struggle with the whole business of modern stocks and
races. This is a book about prehistoric men, not recent historic or
modern men. My physical anthropologist friends have been very patient
in helping me to write and rewrite this chapter--I am not going to
break their patience completely. Races are their business, not mine,
and they must do the writing about races. I shall, however, give two
modern definitions of race, and then make one comment.
Dr. William C. Boyd, professor of Immunochemistry, School of
Medicine, Boston University: "We may define a human race as a
population which differs significantly from other human populations
in regard to the frequency of one or more of the genes it
possesses."
Professor Sherwood L. Washburn, professor of Physical Anthropology,
Department of Anthropology, the University of California: "A race
is a group of genetically similar populations, and races intergrade
because there are always intermediate populations."
My comment is that the ideas involved here are all biological: they
concern groups, _not_ individuals. Boyd and Washburn may differ a bit
on what they want to consider a population, but a population is a
group nevertheless, and genetics is biology to the hilt. Now a lot of
people still think of race in terms of how people dress or fix their
food or of other habits or customs they have. The next step is to talk
about "racial purity." None of this has anything whatever to do with
race proper, which is a matter of the biology of groups.
Incidentally, I'm told that if man very carefully _controls_
the breeding of certain animals over generations--dogs, cattle,
chickens--he might achieve a "pure" race of animals. But he doesn't do
it. Some unfortunate genetic trait soon turns up, so this has just as
carefully to be bred out again, and so on.
SUMMARY OF PRESENT KNOWLEDGE OF FOSSIL MEN
The earliest bones of men we now have--upon which all the experts
would probably agree--are those of _Meganthropus_, from Java, of about
450,000 years ago. The earlier australopithecines of Africa were
possibly not tool-users and may not have been ancestral to men at all.
But there is an alternate and evidently increasingly stronger chance
that some of them may have been. The Kanam jaw from Kenya, another
early possibility, is not only very incomplete but its find-spot is
very questionable.
Java man proper, _Pithecanthropus_, comes next, at about 400,000 years
ago, and the big Heidelberg jaw in Germany must be of about the same
date. Next comes Swanscombe in England, Steinheim in Germany, the
Ternafine jaws in Algeria, and Peking man, _Sinanthropus_. They all
date to the second great interglacial period, about 350,000 years ago.
Piltdown and Galley Hill are out, and with them, much of the starch
in the old idea that there were two distinct lines of development
in human evolution: (1) a line of paleoanthropic development from
Heidelberg to the Neanderthalers where it became extinct, and (2) a
very early modern line, through Piltdown, Galley Hill, Swanscombe, to
us. Swanscombe, Steinheim, and Ternafine are just as easily cases of
very early pre-neanderthaloids.
The pre-neanderthaloids were very widespread during the third
interglacial: Ehringsdorf, Saccopastore, some of the Mount Carmel
people, and probably Fontéchevade are cases in point. A variety of
their descendants can be seen, from Java (Solo), Africa (Rhodesian
man), and about the Mediterranean and in western Europe. As the acute
cold of the last glaciation set in, the western Europeans found
themselves surrounded by water, ice, or bitter cold tundra. To vastly
over-simplify it, they bred in and became classic neanderthaloids.
But on Mount Carmel, the Skhul cave-find with its 70 per cent modern
features shows what could happen elsewhere at the same time.
Lastly, from about 40,000 or 35,000 years ago--the time of the onset
of the second phase of the last glaciation--we begin to find the fully
modern skeletons of men. The modern skeletons differ from place to
place, just as different groups of men living in different places still
look different.
What became of the Neanderthalers? Nobody can tell me for sure. I've a
hunch they were simply bred out again when the cold weather was over.
Many Americans, as the years go by, are no longer ashamed to claim they
have "Indian blood" in their veins. Give us a few more generations
and there will not be very many other Americans left to whom we can
brag about it. It certainly isn't inconceivable to me to imagine a
little Cro-Magnon boy bragging to his friends about his tough, strong,
Neanderthaler great-great-great-great-grandfather!
CULTURAL BEGINNINGS
[Illustration]
Men, unlike the lower animals, are made up of much more than flesh and
blood and bones; for men have culture.
WHAT IS CULTURE?
"Culture" is a word with many meanings. The doctors speak of making a
"culture" of a certain kind of bacteria, and ants are said to have a
"culture." Then there is the Emily Post kind of culture--you say a
person is "cultured," or that he isn't, depending on such things as
whether or not he eats peas with his knife.
The anthropologists use the word too, and argue heatedly over its finer
meanings; but they all agree that every human being is part of or has
some kind of culture. Each particular human group has a particular
culture; that is one of the ways in which we can tell one group of
men from another. In this sense, a CULTURE means the way the members
of a group of people think and believe and live, the tools they make,
and the way they do things. Professor Robert Redfield says a culture
is "an organized or formalized body of conventional understandings."
"Conventional understandings" means the whole set of rules, beliefs,
and standards which a group of people lives by. These understandings
show themselves in art, and in the other things a people may make and
do. The understandings continue to last, through tradition, from one
generation to another. They are what really characterize different
human groups.
SOME CHARACTERISTICS OF CULTURE
A culture lasts, although individual men in the group die off. On
the other hand, a culture changes as the different conventions and
understandings change. You could almost say that a culture lives in the
minds of the men who have it. But people are not born with it; they
get it as they grow up. Suppose a day-old Hungarian baby is adopted by
a family in Oshkosh, Wisconsin, and the child is not told that he is
Hungarian. He will grow up with no more idea of Hungarian culture than
anyone else in Oshkosh.
So when I speak of ancient Egyptian culture, I mean the whole body
of understandings and beliefs and knowledge possessed by the ancient
Egyptians. I mean their beliefs as to why grain grew, as well as their
ability to make tools with which to reap the grain. I mean their
beliefs about life after death. What I am thinking about as culture is
a thing which lasted in time. If any one Egyptian, even the Pharaoh,
died, it didn't affect the Egyptian culture of that particular moment.
PREHISTORIC CULTURES
For that long period of man's history that is all prehistory, we have
no written descriptions of cultures. We find only the tools men made,
the places where they lived, the graves in which they buried their
dead. Fortunately for us, these tools and living places and graves all
tell us something about the ways these men lived and the things they
believed. But the story we learn of the very early cultures must be
only a very small part of the whole, for we find so few things. The
rest of the story is gone forever. We have to do what we can with what
we find.
For all of the time up to about 75,000 years ago, which was the time
of the classic European Neanderthal group of men, we have found few
cave-dwelling places of very early prehistoric men. First, there is the
fallen-in cave where Peking man was found, near Peking. Then there are
two or three other _early_, but not _very early_, possibilities. The
finds at the base of the French cave of Fontéchevade, those in one of
the Makapan caves in South Africa, and several open sites such as Dr.
L. S. B. Leakey's Olorgesailie in Kenya doubtless all lie earlier than
the time of the main European Neanderthal group, but none are so early
as the Peking finds.
You can see that we know very little about the home life of earlier
prehistoric men. We find different kinds of early stone tools, but we
can't even be really sure which tools may have been used together.
WHY LITTLE HAS LASTED FROM EARLY TIMES
Except for the rare find-spots mentioned above, all our very early
finds come from geological deposits, or from the wind-blown surfaces
of deserts. Here is what the business of geological deposits really
means. Let us say that a group of people was living in England about
300,000 years ago. They made the tools they needed, lived in some sort
of camp, almost certainly built fires, and perhaps buried their dead.
While the climate was still warm, many generations may have lived in
the same place, hunting, and gathering nuts and berries; but after some
few thousand years, the weather began very gradually to grow colder.
These early Englishmen would not have known that a glacier was forming
over northern Europe. They would only have noticed that the animals
they hunted seemed to be moving south, and that the berries grew larger
toward the south. So they would have moved south, too.
The camp site they left is the place we archeologists would really have
liked to find. All of the different tools the people used would have
been there together--many broken, some whole. The graves, and traces
of fire, and the tools would have been there. But the glacier got
there first! The front of this enormous sheet of ice moved down over
the country, crushing and breaking and plowing up everything, like a
gigantic bulldozer. You can see what happened to our camp site.
Everything the glacier couldn't break, it pushed along in front of it
or plowed beneath it. Rocks were ground to gravel, and soil was caught
into the ice, which afterwards melted and ran off as muddy water. Hard
tools of flint sometimes remained whole. Human bones weren't so hard;
it's a wonder _any_ of them lasted. Gushing streams of melt water
flushed out the debris from underneath the glacier, and water flowed
off the surface and through great crevasses. The hard materials these
waters carried were even more rolled and ground up. Finally, such
materials were dropped by the rushing waters as gravels, miles from
the front of the glacier. At last the glacier reached its greatest
extent; then it melted backward toward the north. Debris held in the
ice was dropped where the ice melted, or was flushed off by more melt
water. When the glacier, leaving the land, had withdrawn to the sea,
great hunks of ice were broken off as icebergs. These icebergs probably
dropped the materials held in their ice wherever they floated and
melted. There must be many tools and fragmentary bones of prehistoric
men on the bottom of the Atlantic Ocean and the North Sea.
Remember, too, that these glaciers came and went at least three or four
times during the Ice Age. Then you will realize why the earlier things
we find are all mixed up. Stone tools from one camp site got mixed up
with stone tools from many other camp sites--tools which may have been
made tens of thousands or more years apart. The glaciers mixed them
all up, and so we cannot say which particular sets of tools belonged
together in the first place.
EOLITHS
But what sort of tools do we find earliest? For almost a century,
people have been picking up odd bits of flint and other stone in the
oldest Ice Age gravels in England and France. It is now thought these
odd bits of stone weren't actually worked by prehistoric men. The
stones were given a name, _eoliths_, or "dawn stones." You can see them
in many museums; but you can be pretty sure that very few of them were
actually fashioned by men.
It is impossible to pick out eoliths that seem to be made in any
one _tradition_. By tradition I mean a set of habits for making one
kind of tool for some particular job. No two eoliths look very much
alike: tools made as part of some one tradition all look much alike.
Now it's easy to suppose that the very earliest prehistoric men picked
up and used almost any sort of stone. This wouldn't be surprising; you
and I do it when we go camping. In other words, some of these eoliths
may actually have been used by prehistoric men. They must have used
anything that might be handy when they needed it. We could have figured
that out without the eoliths.
THE ROAD TO STANDARDIZATION
Reasoning from what we know or can easily imagine, there should have
been three major steps in the prehistory of tool-making. The first step
would have been simple _utilization_ of what was at hand. This is the
step into which the eoliths would fall. The second step would have
been _fashioning_--the haphazard preparation of a tool when there was a
need for it. Probably many of the earlier pebble tools, which I shall
describe next, fall into this group. The third step would have been
_standardization_. Here, men began to make tools according to certain
set traditions. Counting the better-made pebble tools, there are four
such traditions or sets of habits for the production of stone tools in
earliest prehistoric times. Toward the end of the Pleistocene, a fifth
tradition appears.
PEBBLE TOOLS
At the beginning of the last chapter, you'll remember that I said there
were tools from very early geological beds. The earliest bones of men
have not yet been found in such early beds although the Sterkfontein
australopithecine cave approaches this early date. The earliest tools
come from Africa. They date back to the time of the first great
alpine glaciation and are at least 500,000 years old. The earliest
ones are made of split pebbles, about the size of your fist or a bit
bigger. They go under the name of pebble tools. There are many natural
exposures of early Pleistocene geological beds in Africa, and the
prehistoric archeologists of south and central Africa have concentrated
on searching for early tools. Other finds of early pebble tools have
recently been made in Algeria and Morocco.
[Illustration: SOUTH AFRICAN PEBBLE TOOL]
There are probably early pebble tools to be found in areas of the
Old World besides Africa; in fact, some prehistorians already claim
to have identified a few. Since the forms and the distinct ways of
making the earlier pebble tools had not yet sufficiently jelled into
a set tradition, they are difficult for us to recognize. It is not
so difficult, however, if there are great numbers of "possibles"
available. A little later in time the tradition becomes more clearly
set, and pebble tools are easier to recognize. So far, really large
collections of pebble tools have only been found and examined in Africa.
CORE-BIFACE TOOLS
The next tradition well look at is the _core_ or biface one. The tools
are large pear-shaped pieces of stone trimmed flat on the two opposite
sides or faces. Hence "biface" has been used to describe these tools.
The front view is like that of a pear with a rather pointed top, and
the back view looks almost exactly the same. Look at them side on, and
you can see that the front and back faces are the same and have been
trimmed to a thin tip. The real purpose in trimming down the two faces
was to get a good cutting edge all around. You can see all this in the
illustration.
[Illustration: ABBEVILLIAN BIFACE]
We have very little idea of the way in which these core-bifaces were
used. They have been called "hand axes," but this probably gives the
wrong idea, for an ax, to us, is not a pointed tool. All of these early
tools must have been used for a number of jobs--chopping, scraping,
cutting, hitting, picking, and prying. Since the core-bifaces tend to
be pointed, it seems likely that they were used for hitting, picking,
and prying. But they have rough cutting edges, so they could have been
used for chopping, scraping, and cutting.
FLAKE TOOLS
The third tradition is the _flake_ tradition. The idea was to get a
tool with a good cutting edge by simply knocking a nice large flake off
a big block of stone. You had to break off the flake in such a way that
it was broad and thin, and also had a good sharp cutting edge. Once you
really got on to the trick of doing it, this was probably a simpler way
to make a good cutting tool than preparing a biface. You have to know
how, though; I've tried it and have mashed my fingers more than once.
The flake tools look as if they were meant mainly for chopping,
scraping, and cutting jobs. When one made a flake tool, the idea seems
to have been to produce a broad, sharp, cutting edge.
[Illustration: CLACTONIAN FLAKE]
The core-biface and the flake traditions were spread, from earliest
times, over much of Europe, Africa, and western Asia. The map on page
52 shows the general area. Over much of this great region there was
flint. Both of these traditions seem well adapted to flint, although
good core-bifaces and flakes were made from other kinds of stone,
especially in Africa south of the Sahara.
CHOPPERS AND ADZE-LIKE TOOLS
The fourth early tradition is found in southern and eastern Asia, from
northwestern India through Java and Burma into China. Father Maringer
recently reported an early group of tools in Japan, which most resemble
those of Java, called Patjitanian. The prehistoric men in this general
area mostly used quartz and tuff and even petrified wood for their
stone tools (see illustration, p. 46).
This fourth early tradition is called the _chopper-chopping tool_
tradition. It probably has its earliest roots in the pebble tool
tradition of African type. There are several kinds of tools in this
tradition, but all differ from the western core-bifaces and flakes.
There are broad, heavy scrapers or cleavers, and tools with an
adze-like cutting edge. These last-named tools are called "hand adzes,"
just as the core-bifaces of the west have often been called "hand
axes." The section of an adze cutting edge is beveled on one face only,
like a chisel's; the section of an ax is < shaped, beveled on both faces.
[Illustration: ANYATHIAN ADZE-LIKE TOOL]
There are also pointed pebble tools. Thus the tool kit of these early
south and east Asiatic peoples seems to have included tools for doing
as many different jobs as did the tools of the Western traditions.
Dr. H. L. Movius has emphasized that the tools which were found in the
Peking cave with Peking man belong to the chopper-tool tradition. This
is the only case as yet where the tools and the man have been found
together from very earliest times--if we except Sterkfontein.
DIFFERENCES WITHIN THE TOOL-MAKING TRADITIONS
The latter three great traditions in the manufacture of stone
tools--and the less clear-cut pebble tools before them--are all we have
to show of the cultures of the men of those times. Changes happened in
each of the traditions. As time went on, the tools in each tradition
were better made. There could also be slight regional differences in
the tools within one tradition. Thus, tools with small differences, but
all belonging to one tradition, can be given special group (facies)
names.
This naming of special groups has been going on for some time. Here are
some of these names, since you may see them used in museum displays
of flint tools, or in books. Within each tradition of tool-making
(save the chopper tools), the earliest tool type is at the bottom
of the list, just as it appears in the lowest beds of a geological
stratification.[3]
[3] Archeologists usually make their charts and lists with the
earliest materials at the bottom and the latest on top, since
this is the way they find them in the ground.
Chopper tool (all about equally early):
Anyathian (Burma)
Choukoutienian (China)
Patjitanian (Java)
Soan (India)
Flake:
Typical Mousterian
Levalloiso-Mousterian
Levalloisian
Tayacian
Clactonian (localized in England)
Core-biface:
Some blended elements in Mousterian
Micoquian (= Acheulean 6 and 7)
Acheulean
Abbevillian (once called Chellean)
Pebble tool:
Oldowan
Ain Hanech
pre-Stellenbosch
Kafuan
The core-biface and the flake traditions appear in the chart (p. 65).
The early archeologists had many of the tool groups named before they
ever realized that there were broader tool preparation traditions. This
was understandable, for in dealing with the mixture of things that come
out of glacial gravels the easiest thing to do first is to isolate
individual types of tools into groups. First you put a bushel-basketful
of tools on a table and begin matching up types. Then you give names to
the groups of each type. The groups and the types are really matters of
the archeologist's choice; in real life, they were probably less exact
than the archeologists' lists of them. We now know pretty well in which
of the early traditions the various early groups belong.
THE MEANING OF THE DIFFERENT TRADITIONS
What do the traditions really mean? I see them as the standardization
of ways to make tools for particular jobs. We may not know exactly what
job the maker of a particular core-biface or flake tool had in mind. We
can easily see, however, that he already enjoyed a know-how, a set of
persistent habits of tool preparation, which would always give him the
same type of tool when he wanted to make it. Therefore, the traditions
show us that persistent habits already existed for the preparation of
one type of tool or another.
This tells us that one of the characteristic aspects of human culture
was already present. There must have been, in the minds of these
early men, a notion of the ideal type of tool for a particular job.
Furthermore, since we find so many thousands upon thousands of tools
of one type or another, the notion of the ideal types of tools _and_
the know-how for the making of each type must have been held in common
by many men. The notions of the ideal types and the know-how for their
production must have been passed on from one generation to another.
I could even guess that the notions of the ideal type of one or the
other of these tools stood out in the minds of men of those times
somewhat like a symbol of "perfect tool for good job." If this were
so--remember it's only a wild guess of mine--then men were already
symbol users. Now let's go on a further step to the fact that the words
men speak are simply sounds, each different sound being a symbol for a
different meaning. If standardized tool-making suggests symbol-making,
is it also possible that crude word-symbols were also being made? I
suppose that it is not impossible.
There may, of course, be a real question whether tool-utilizing
creatures--our first step, on page 42--were actually men. Other
animals utilize things at hand as tools. The tool-fashioning creature
of our second step is more suggestive, although we may not yet feel
sure that many of the earlier pebble tools were man-made products. But
with the step to standardization and the appearance of the traditions,
I believe we must surely be dealing with the traces of culture-bearing
_men_. The "conventional understandings" which Professor Redfield's
definition of culture suggests are now evidenced for us in the
persistent habits for the preparation of stone tools. Were we able to
see the other things these prehistoric men must have made--in materials
no longer preserved for the archeologist to find--I believe there would
be clear signs of further conventional understandings. The men may have
been physically primitive and pretty shaggy in appearance, but I think
we must surely call them men.
AN OLDER INTERPRETATION OF THE WESTERN TRADITIONS
In the last chapter, I told you that many of the older archeologists
and human paleontologists used to think that modern man was very old.
The supposed ages of Piltdown and Galley Hill were given as evidence
of the great age of anatomically modern man, and some interpretations
of the Swanscombe and Fontéchevade fossils were taken to support
this view. The conclusion was that there were two parallel lines or
"phyla" of men already present well back in the Pleistocene. The
first of these, the more primitive or "paleoanthropic" line, was
said to include Heidelberg, the proto-neanderthaloids and classic
Neanderthal. The more anatomically modern or "neanthropic" line was
thought to consist of Piltdown and the others mentioned above. The
Neanderthaler or paleoanthropic line was thought to have become extinct
after the first phase of the last great glaciation. Of course, the
modern or neanthropic line was believed to have persisted into the
present, as the basis for the world's population today. But with
Piltdown liquidated, Galley Hill known to be very late, and Swanscombe
and Fontéchevade otherwise interpreted, there is little left of the
so-called parallel phyla theory.
While the theory was in vogue, however, and as long as the European
archeological evidence was looked at in one short-sighted way, the
archeological materials _seemed_ to fit the parallel phyla theory. It
was simply necessary to believe that the flake tools were made only
by the "paleoanthropic" Neanderthaler line, and that the more handsome
core-biface tools were the product of the "neanthropic" modern-man line.
Remember that _almost_ all of the early prehistoric European tools
came only from the redeposited gravel beds. This means that the tools
were not normally found in the remains of camp sites or work shops
where they had actually been dropped by the men who made and used
them. The tools came, rather, from the secondary hodge-podge of the
glacial gravels. I tried to give you a picture of the bulldozing action
of glaciers (p. 40) and of the erosion and weathering that were
side-effects of a glacially conditioned climate on the earth's surface.
As we said above, if one simply plucks tools out of the redeposited
gravels, his natural tendency is to type the tools by groups, and to
think that the groups stand for something _on their own_.
In 1906, M. Victor Commont actually made a rare find of what seems
to have been a kind of workshop site, on a terrace above the Somme
river in France. Here, Commont realized, flake tools appeared clearly
in direct association with core-biface tools. Few prehistorians paid
attention to Commont or his site, however. It was easier to believe
that flake tools represented a distinct "culture" and that this
"culture" was that of the Neanderthaler or paleoanthropic line, and
that the core-bifaces stood for another "culture" which was that of the
supposed early modern or neanthropic line. Of course, I am obviously
skipping many details here. Some later sites with Neanderthal fossils
do seem to have only flake tools, but other such sites have both types
of tools. The flake tools which appeared _with_ the core-bifaces
in the Swanscombe gravels were never made much of, although it
was embarrassing for the parallel phyla people that Fontéchevade
ran heavily to flake tools. All in all, the parallel phyla theory
flourished because it seemed so neat and easy to understand.
TRADITIONS ARE TOOL-MAKING HABITS, NOT CULTURES
In case you think I simply enjoy beating a dead horse, look in any
standard book on prehistory written twenty (or even ten) years ago, or
in most encyclopedias. You'll find that each of the individual tool
types, of the West, at least, was supposed to represent a "culture."
The "cultures" were believed to correspond to parallel lines of human
evolution.
In 1937, Mr. Harper Kelley strongly re-emphasized the importance
of Commont's workshop site and the presence of flake tools with
core-bifaces. Next followed Dr. Movius' clear delineation of the
chopper-chopping tool tradition of the Far East. This spoiled the nice
symmetry of the flake-tool = paleoanthropic, core-biface = neanthropic
equations. Then came increasing understanding of the importance of
the pebble tools in Africa, and the location of several more workshop
sites there, especially at Olorgesailie in Kenya. Finally came the
liquidation of Piltdown and the deflation of Galley Hill's date. So it
is at last possible to picture an individual prehistoric man making a
flake tool to do one job and a core-biface tool to do another. Commont
showed us this picture in 1906, but few believed him.
[Illustration: DISTRIBUTION OF TOOL-PREPARATION TRADITIONS
Time approximately 100,000 years ago]
There are certainly a few cases in which flake tools did appear with
few or no core-bifaces. The flake-tool group called "Clactonian" in
England is such a case. Another good, but certainly later case is
that of the cave on Mount Carmel in Palestine, where the blended
pre-neanderthaloid, 70 per cent modern-type skulls were found. Here, in
the same level with the skulls, were 9,784 flint tools. Of these, only
three--doubtless strays--were core-bifaces; all the rest were flake
tools or flake chips. We noted above how the Fontéchevade cave ran to
flake tools. The only conclusion I would draw from this is that times
and circumstances did exist in which prehistoric men needed only flake
tools. So they only made flake tools for those particular times and
circumstances.
LIFE IN EARLIEST TIMES
What do we actually know of life in these earliest times? In the
glacial gravels, or in the terrace gravels of rivers once swollen by
floods of melt water or heavy rains, or on the windswept deserts, we
find stone tools. The earliest and coarsest of these are the pebble
tools. We do not yet know what the men who made them looked like,
although the Sterkfontein australopithecines probably give us a good
hint. Then begin the more formal tool preparation traditions of the
west--the core-bifaces and the flake tools--and the chopper-chopping
tool series of the farther east. There is an occasional roughly worked
piece of bone. From the gravels which yield the Clactonian flakes of
England comes the fire-hardened point of a wooden spear. There are
also the chance finds of the fossil human bones themselves, of which
we spoke in the last chapter. Aside from the cave of Peking man, none
of the earliest tools have been found in caves. Open air or workshop
sites which do not seem to have been disturbed later by some geological
agency are very rare.
The chart on page 65 shows graphically what the situation in
west-central Europe seems to have been. It is not yet certain whether
there were pebble tools there or not. The Fontéchevade cave comes
into the picture about 100,000 years ago or more. But for the earlier
hundreds of thousands of years--below the red-dotted line on the
chart--the tools we find come almost entirely from the haphazard
mixture within the geological contexts.
The stone tools of each of the earlier traditions are the simplest
kinds of all-purpose tools. Almost any one of them could be used for
hacking, chopping, cutting, and scraping; so the men who used them must
have been living in a rough and ready sort of way. They found or hunted
their food wherever they could. In the anthropological jargon, they
were "food-gatherers," pure and simple.
Because of the mixture in the gravels and in the materials they
carried, we can't be sure which animals these men hunted. Bones of
the larger animals turn up in the gravels, but they could just as
well belong to the animals who hunted the men, rather than the other
way about. We don't know. This is why camp sites like Commont's and
Olorgesailie in Kenya are so important when we do find them. The animal
bones at Olorgesailie belonged to various mammals of extremely large
size. Probably they were taken in pit-traps, but there are a number of
groups of three round stones on the site which suggest that the people
used bolas. The South American Indians used three-ball bolas, with the
stones in separate leather bags connected by thongs. These were whirled
and then thrown through the air so as to entangle the feet of a fleeing
animal.
Professor F. Clark Howell recently returned from excavating another
important open air site at Isimila in Tanganyika. The site yielded
the bones of many fossil animals and also thousands of core-bifaces,
flakes, and choppers. But Howell's reconstruction of the food-getting
habits of the Isimila people certainly suggests that the word "hunting"
is too dignified for what they did; "scavenging" would be much nearer
the mark.
During a great part of this time the climate was warm and pleasant. The
second interglacial period (the time between the second and third great
alpine glaciations) lasted a long time, and during much of this time
the climate may have been even better than ours is now. We don't know
that earlier prehistoric men in Europe or Africa lived in caves. They
may not have needed to; much of the weather may have been so nice that
they lived in the open. Perhaps they didn't wear clothes, either.
WHAT THE PEKING CAVE-FINDS TELL US
The one early cave-dwelling we have found is that of Peking man, in
China. Peking man had fire. He probably cooked his meat, or used
the fire to keep dangerous animals away from his den. In the cave
were bones of dangerous animals, members of the wolf, bear, and cat
families. Some of the cat bones belonged to beasts larger than tigers.
There were also bones of other wild animals: buffalo, camel, deer,
elephants, horses, sheep, and even ostriches. Seventy per cent of the
animals Peking man killed were fallow deer. It's much too cold and dry
in north China for all these animals to live there today. So this list
helps us know that the weather was reasonably warm, and that there was
enough rain to grow grass for the grazing animals. The list also helps
the paleontologists to date the find.
Peking man also seems to have eaten plant food, for there are hackberry
seeds in the debris of the cave. His tools were made of sandstone and
quartz and sometimes of a rather bad flint. As we've already seen, they
belong in the chopper-tool tradition. It seems fairly clear that some
of the edges were chipped by right-handed people. There are also many
split pieces of heavy bone. Peking man probably split them so he could
eat the bone marrow, but he may have used some of them as tools.
Many of these split bones were the bones of Peking men. Each one of the
skulls had already had the base broken out of it. In no case were any
of the bones resting together in their natural relation to one another.
There is nothing like a burial; all of the bones are scattered. Now
it's true that animals could have scattered bodies that were not cared
for or buried. But splitting bones lengthwise and carefully removing
the base of a skull call for both the tools and the people to use them.
It's pretty clear who the people were. Peking man was a cannibal.
* * * * *
This rounds out about all we can say of the life and times of early
prehistoric men. In those days life was rough. You evidently had to
watch out not only for dangerous animals but also for your fellow men.
You ate whatever you could catch or find growing. But you had sense
enough to build fires, and you had already formed certain habits for
making the kinds of stone tools you needed. Thats about all we know.
But I think well have to admit that cultural beginnings had been made,
and that these early people were really _men_.
MORE EVIDENCE of Culture
[Illustration]
While the dating is not yet sure, the material that we get from caves
in Europe must go back to about 100,000 years ago; the time of the
classic Neanderthal group followed soon afterwards. We don't know why
there is no earlier material in the caves; apparently they were not
used before the last interglacial phase (the period just before the
last great glaciation). We know that men of the classic Neanderthal
group were living in caves from about 75,000 to 45,000 years ago.
New radioactive carbon dates even suggest that some of the traces of
culture we'll describe in this chapter may have lasted to about 35,000
years ago. Probably some of the pre-neanderthaloid types of men had
also lived in caves. But we have so far found their bones in caves only
in Palestine and at Fontchevade.
THE CAVE LAYERS
In parts of France, some peasants still live in caves. In prehistoric
time, many generations of people lived in them. As a result, many
caves have deep layers of debris. The first people moved in and lived
on the rock floor. They threw on the floor whatever they didn't want,
and they tracked in mud; nobody bothered to clean house in those days.
Their debris--junk and mud and garbage and what not--became packed
into a layer. As time went on, and generations passed, the layer grew
thicker. Then there might have been a break in the occupation of the
cave for a while. Perhaps the game animals got scarce and the people
moved away; or maybe the cave became flooded. Later on, other people
moved in and began making a new layer of their own on top of the first
layer. Perhaps this process of layering went on in the same cave for a
hundred thousand years; you can see what happened. The drawing on this
page shows a section through such a cave. The earliest layer is on the
bottom, the latest one on top. They go in order from bottom to top,
earliest to latest. This is the _stratification_ we talked about (p.
12).
[Illustration: SECTION OF SHELTER ON LOWER TERRACE, LE MOUSTIER]
While we may find a mix-up in caves, it's not nearly as bad as the
mixing up that was done by glaciers. The animal bones and shells, the
fireplaces, the bones of men, and the tools the men made all belong
together, if they come from one layer. That's the reason why the cave
of Peking man is so important. It is also the reason why the caves in
Europe and the Near East are so important. We can get an idea of which
things belong together and which lot came earliest and which latest.
In most cases, prehistoric men lived only in the mouths of caves.
They didn't like the dark inner chambers as places to live in. They
preferred rock-shelters, at the bases of overhanging cliffs, if there
was enough overhang to give shelter. When the weather was good, they no
doubt lived in the open air as well. I'll go on using the term "cave"
since it's more familiar, but remember that I really mean "rock-shelter,"
as a place in which people actually lived.
The most important European cave sites are in Spain, France, and
central Europe; there are also sites in England and Italy. A few caves
are known in the Near East and Africa, and no doubt more sites will be
found when the out-of-the-way parts of Europe, Africa, and Asia are
studied.
AN INDUSTRY DEFINED
We have already seen that the earliest European cave materials are
those from the cave of Fontéchevade. Movius feels certain that the
lowest materials here date back well into the third interglacial stage,
that which lay between the Riss (next to the last) and the Würm I
(first stage of the last) alpine glaciations. This material consists
of an _industry_ of stone tools, apparently all made in the flake
tradition. This is the first time we have used the word "industry."
It is useful to call all of the different tools found together in one
layer and made of _one kind of material_ an "industry"; that is, the
tools must be found together as men left them. Tools taken from the
glacial gravels (or from windswept desert surfaces or river gravels
or any geological deposit) are not together in this sense. We might
say the latter have only geological, not archeological context.
"Archeological context" means finding things just as men left them. We
can tell what tools go together in an industrial sense only if we
have archeological context.
Up to now, the only things we could have called industries were the
worked stone industry and perhaps the worked (?) bone industry of the
Peking cave. We could add some of the very clear cases of open air
sites, like Olorgesailie. We couldn't use the term for the stone tools
from the glacial gravels, because we do not know which tools belonged
together. But when the cave materials begin to appear in Europe, we can
begin to speak of industries. Most of the European caves of this time
contain industries of flint tools alone.
THE EARLIEST EUROPEAN CAVE LAYERS
We've just mentioned the industry from what is said to be the oldest
inhabited cave in Europe; that is, the industry from the deepest layer
of the site at Fontéchevade. Apparently it doesn't amount to much. The
tools are made of stone, in the flake tradition, and are very poorly
worked. This industry is called _Tayacian_. Its type tool seems to be
a smallish flake tool, but there are also larger flakes which seem to
have been fashioned for hacking. In fact, the type tool seems to be
simply a smaller edition of the Clactonian tool (pictured on p. 45).
None of the Fontéchevade tools are really good. There are scrapers,
and more or less pointed tools, and tools that may have been used
for hacking and chopping. Many of the tools from the earlier glacial
gravels are better made than those of this first industry we see in
a European cave. There is so little of this material available that
we do not know which is really typical and which is not. You would
probably find it hard to see much difference between this industry and
a collection of tools of the type called Clactonian, taken from the
glacial gravels, especially if the Clactonian tools were small-sized.
The stone industry of the bottommost layer of the Mount Carmel cave,
in Palestine, where somewhat similar tools were found, has also been
called Tayacian.
I shall have to bring in many unfamiliar words for the names of the
industries. The industries are usually named after the places where
they were first found, and since these were in most cases in France,
most of the names which follow will be of French origin. However,
the names have simply become handles and are in use far beyond the
boundaries of France. It would be better if we had a non-place-name
terminology, but archeologists have not yet been able to agree on such
a terminology.
THE ACHEULEAN INDUSTRY
Both in France and in Palestine, as well as in some African cave
sites, the next layers in the deep caves have an industry in both the
core-biface and the flake traditions. The core-biface tools usually
make up less than half of all the tools in the industry. However,
the name of the biface type of tool is generally given to the whole
industry. It is called the _Acheulean_, actually a late form of it, as
Acheulean is also used for earlier core-biface tools taken from the
glacial gravels. In western Europe, the name used is _Upper Acheulean_
or _Micoquian_. The same terms have been borrowed to name layers E and
F in the Tabun cave, on Mount Carmel in Palestine.
The Acheulean core-biface type of tool is worked on two faces so as
to give a cutting edge all around. The outline of its front view may
be oval, or egg-shaped, or a quite pointed pear shape. The large
chip-scars of the Acheulean core-bifaces are shallow and flat. It is
suspected that this resulted from the removal of the chips with a
wooden club; the deep chip-scars of the earlier Abbevillian core-biface
came from beating the tool against a stone anvil. These tools are
really the best and also the final products of the core-biface
tradition. We first noticed the tradition in the early glacial gravels
(p. 43); now we see its end, but also its finest examples, in the
deeper cave levels.
The flake tools, which really make up the greater bulk of this
industry, are simple scrapers and chips with sharp cutting edges. The
habits used to prepare them must have been pretty much the same as
those used for at least one of the flake industries we shall mention
presently.
There is very little else in these early cave layers. We do not have
a proper industry of bone tools. There are traces of fire, and of
animal bones, and a few shells. In Palestine, there are many more
bones of deer than of gazelle in these layers; the deer lives in a
wetter climate than does the gazelle. In the European cave layers, the
animal bones are those of beasts that live in a warm climate. They
belonged in the last interglacial period. We have not yet found the
bones of fossil men definitely in place with this industry.
[Illustration: ACHEULEAN BIFACE]
FLAKE INDUSTRIES FROM THE CAVES
Two more stone industries--the _Levalloisian_ and the
_Mousterian_--turn up at approximately the same time in the European
cave layers. Their tools seem to be mainly in the flake tradition,
but according to some of the authorities their preparation also shows
some combination with the habits by which the core-biface tools were
prepared.
Now notice that I dont tell you the Levalloisian and the Mousterian
layers are both above the late Acheulean layers. Look at the cave
section (p. 57) and you'll find that some "Mousterian of Acheulean
tradition" appears above some typical Mousterian. This means that
there may be some kinds of Acheulean industries that are later than
some kinds of Mousterian. The same is true of the Levalloisian.
There were now several different kinds of habits that men used in
making stone tools. These habits were based on either one or the other
of the two traditions--core-biface or flake--or on combinations of
the habits used in the preparation techniques of both traditions. All
were popular at about the same time. So we find that people who made
one kind of stone tool industry lived in a cave for a while. Then they
gave up the cave for some reason, and people with another industry
moved in. Then the first people came back--or at least somebody with
the same tool-making habits as the first people. Or maybe a third group
of tool-makers moved in. The people who had these different habits for
making their stone tools seem to have moved around a good deal. They no
doubt borrowed and exchanged tricks of the trade with each other. There
were no patent laws in those days.
The extremely complicated interrelationships of the different habits
used by the tool-makers of this range of time are at last being
systematically studied. M. François Bordes has developed a statistical
method of great importance for understanding these tool preparation
habits.
THE LEVALLOISIAN AND MOUSTERIAN
The easiest Levalloisian tool to spot is a big flake tool. The trick
in making it was to fashion carefully a big chunk of stone (called
the Levalloisian "tortoise core," because it resembles the shape of
a turtle-shell) and then to whack this in such a way that a large
flake flew off. This large thin flake, with sharp cutting edges, is
the finished Levalloisian tool. There were various other tools in a
Levalloisian industry, but this is the characteristic _Levalloisian_
tool.
There are several typical Mousterian stone tools. Different from
the tools of the Levalloisian type, these were made from disc-like
cores. There are medium-sized flake side scrapers. There are also
some small pointed tools and some small hand axes. The last of these
tool types is often a flake worked on both of the flat sides (that
is, bifacially). There are also pieces of flint worked into the form
of crude balls. The pointed tools may have been fixed on shafts to
make short jabbing spears; the round flint balls may have been used as
bolas. Actually, we don't _know_ what either tool was used for. The
points and side scrapers are illustrated (pp. 64 and 66).
[Illustration: LEVALLOIS FLAKE]
THE MIXING OF TRADITIONS
Nowadays the archeologists are less and less sure of the importance
of any one specific tool type and name. Twenty years ago, they used
to speak simply of "Acheulean" or "Levalloisian" or "Mousterian" tools.
Now, more and more, _all_ of the tools from some one layer in a
cave are called an "industry," which is given a mixed name. Thus we
have Levalloiso-Mousterian, and Acheuleo-Levalloisian, and even
Acheuleo-Mousterian (or "Mousterian of Acheulean tradition"). Bordes'
systematic work is beginning to clear up some of our confusion.
The time of these late Acheuleo-Levalloiso-Mousterioid industries
is from perhaps as early as 100,000 years ago. It may have lasted
until well past 50,000 years ago. This was the time of the first
phase of the last great glaciation. It was also the time that the
classic group of Neanderthal men was living in Europe. A number of
the Neanderthal fossil finds come from these cave layers. Before the
different habits of tool preparation were understood it used to be
popular to say "Neanderthal man was Mousterian man." I think this is
wrong. What used to be called "Mousterian" is now known to be a variety
of industries with tools of both core-biface and flake habits, and
so mixed that the word "Mousterian" used alone really doesn't mean
anything. The Neanderthalers doubtless understood the tool preparation
habits by means of which Acheulean, Levalloisian and Mousterian type
tools were produced. We also have the more modern-like Mount Carmel
people, found in a cave layer of Palestine with tools almost entirely
in the flake tradition, called Levalloiso-Mousterian, and the
Fontéchevade-Tayacian (p. 59).
[Illustration: MOUSTERIAN POINT]
OTHER SUGGESTIONS OF LIFE IN THE EARLY CAVE LAYERS
Except for the stone tools, what do we know of the way men lived in the
time range after 100,000 to perhaps 40,000 years ago or even later?
We know that in the area from Europe to Palestine, at least some of
the people (some of the time) lived in the fronts of caves and warmed
themselves over fires. In Europe, in the cave layers of these times,
we find the bones of different animals; the bones in the lowest layers
belong to animals that lived in a warm climate; above them are the
bones of those who could stand the cold, like the reindeer and mammoth.
Thus, the meat diet must have been changing, as the glacier crept
farther south. Shells and possibly fish bones have lasted in these
cave layers, but there is not a trace of the vegetable foods and the
nuts and berries and other wild fruits that must have been eaten when
they could be found.
[Illustration: CHART SHOWING PRESENT UNDERSTANDING OF RELATIONSHIPS AND
SUCCESSION OF TOOL-PREPARATION TRADITIONS, INDUSTRIES, AND ASSEMBLAGES
OF WEST-CENTRAL EUROPE
Wavy lines indicate transitions in industrial habits. These transitions
are not yet understood in detail. The glacial and climatic scheme shown
is the alpine one.]
Bone tools have also been found from this period. Some are called
scrapers, and there are also long chisel-like leg-bone fragments
believed to have been used for skinning animals. Larger hunks of bone,
which seem to have served as anvils or chopping blocks, are fairly
common.
Bits of mineral, used as coloring matter, have also been found. We
don't know what the color was used for.
[Illustration: MOUSTERIAN SIDE SCRAPER]
There is a small but certain number of cases of intentional burials.
These burials have been found on the floors of the caves; in other
words, the people dug graves in the places where they lived. The holes
made for the graves were small. For this reason (or perhaps for some
other?) the bodies were in a curled-up or contracted position. Flint or
bone tools or pieces of meat seem to have been put in with some of the
bodies. In several cases, flat stones had been laid over the graves.
TOOLS FROM AFRICA AND ASIA ABOUT 100,000 YEARS AGO
Professor Movius characterizes early prehistoric Africa as a continent
showing a variety of stone industries. Some of these industries were
purely local developments and some were practically identical with
industries found in Europe at the same time. From northwest Africa
to Capetown--excepting the tropical rain forest region of the west
center--tools of developed Acheulean, Levalloisian, and Mousterian
types have been recognized. Often they are named after African place
names.
In east and south Africa lived people whose industries show a
development of the Levalloisian technique. Such industries are
called "Stillbay." Another industry, developed on the basis of the
Acheulean technique, is called "Fauresmith." From the northwest comes
an industry with tanged points and flake-blades; this is called the
"Aterian." The tropical rain forest region contained people whose stone
tools apparently show adjustment to this peculiar environment; the
so-called "Sangoan" industry includes stone picks, adzes, core-bifaces
of specialized Acheulean type, and bifacial points which were probably
spearheads.
In western Asia, even as far as the east coast of India, the tools of
the Eurafrican core-biface and flake tool traditions continued to be
used. But in the Far East, as we noted in the last chapter, men had
developed characteristic stone chopper and chopping tools. This tool
preparation tradition--basically a pebble tool tradition--lasted to the
very end of the Ice Age.
When more intact open air sites such as that of an earlier time at
Olorgesailie, and more stratified cave sites are found and excavated
in Asia and Africa, we shall be able to get a more complete picture.
So far, our picture of the general cultural level of the Old World at
about 100,000 years ago--and soon afterwards--is best from Europe, but
it is still far from complete there, too.
CULTURE AT THE BEGINNING OF THE LAST GREAT GLACIAL PERIOD
The few things we have found must indicate only a very small part
of the total activities of the people who lived at the time. All of
the things they made of wood and bark, of skins, of anything soft,
are gone. The fact that burials were made, at least in Europe and
Palestine, is pretty clear proof that the people had some notion of a
life after death. But what this notion really was, or what gods (if
any) men believed in, we cannot know. Dr. Movius has also reminded me
of the so-called "bear cults"--cases in which caves have been found which
contain the skulls of bears in apparently purposeful arrangement. This
might suggest some notion of hoarding up the spirits or the strength of
bears killed in the hunt. Probably the people lived in small groups,
as hunting and food-gathering seldom provide enough food for large
groups of people. These groups probably had some kind of leader or
chief. Very likely the rude beginnings of rules for community life
and politics, and even law, were being made. But what these were, we
do not know. We can only guess about such things, as we can only guess
about many others; for example, how the idea of a family must have been
growing, and how there may have been witch doctors who made beginnings
in medicine or in art, in the materials they gathered for their trade.
The stone tools help us most. They have lasted, and we can find
them. As they come to us, from this cave or that, and from this
layer or that, the tool industries show a variety of combinations
of the different basic habits or traditions of tool preparation.
This seems only natural, as the groups of people must have been very
small. The mixtures and blendings of the habits used in making stone
tools must mean that there were also mixtures and blends in many of
the other ideas and beliefs of these small groups. And what this
probably means is that there was no one _culture_ of the time. It is
certainly unlikely that there were simply three cultures, Acheulean,
Levalloisian, and Mousterian, as has been thought in the past.
Rather there must have been a great variety of loosely related cultures
at about the same stage of advancement. We could say, too, that here
we really begin to see, for the first time, that remarkable ability
of men to adapt themselves to a variety of conditions. We shall see
this adaptive ability even more clearly as time goes on and the record
becomes more complete.
Over how great an area did these loosely related cultures reach in
the time 75,000 to 45,000 or even as late as 35,000 years ago? We
have described stone tools made in one or another of the flake and
core-biface habits, for an enormous area. It covers all of Europe, all
of Africa, the Near East, and parts of India. It is perfectly possible
that the flake and core-biface habits lasted on after 35,000 years ago,
in some places outside of Europe. In northern Africa, for example, we
are certain that they did (see chart, p. 72).
On the other hand, in the Far East (China, Burma, Java) and in northern
India, the tools of the old chopper-tool tradition were still being
made. Out there, we must assume, there was a different set of loosely
related cultures. At least, there was a different set of loosely
related habits for the making of tools. But the men who made them must
have looked much like the men of the West. Their tools were different,
but just as useful.
As to what the men of the West looked like, I've already hinted at all
we know so far (pp. 29 ff.). The Neanderthalers were present at
the time. Some more modern-like men must have been about, too, since
fossils of them have turned up at Mount Carmel in Palestine, and at
Teshik Tash, in Trans-caspian Russia. It is still too soon to know
whether certain combinations of tools within industries were made
only by certain physical types of men. But since tools of both the
core-biface and the flake traditions, and their blends, turn up from
South Africa to England to India, it is most unlikely that only one
type of man used only one particular habit in the preparation of tools.
What seems perfectly clear is that men in Africa and men in India were
making just as good tools as the men who lived in western Europe.
EARLY MODERNS
[Illustration]
From some time during the first inter-stadial of the last great
glaciation (say some time after about 40,000 years ago), we have
more accurate dates for the European-Mediterranean area and less
accurate ones for the rest of the Old World. This is probably
because the effects of the last glaciation have been studied in the
European-Mediterranean area more than they have been elsewhere.
A NEW TRADITION APPEARS
Something new was probably beginning to happen in the
European-Mediterranean area about 40,000 years ago, though all the
rest of the Old World seems to have been going on as it had been. I
can't be sure of this because the information we are using as a basis
for dates is very inaccurate for the areas outside of Europe and the
Mediterranean.
We can at least make a guess. In Egypt and north Africa, men were still
using the old methods of making stone tools. This was especially true
of flake tools of the Levalloisian type, save that they were growing
smaller and smaller as time went on. But at the same time, a new
tradition was becoming popular in westernmost Asia and in Europe. This
was the blade-tool tradition.
BLADE TOOLS
A stone blade is really just a long parallel-sided flake, as the
drawing shows. It has sharp cutting edges, and makes a very useful
knife. The real trick is to be able to make one. It is almost
impossible to make a blade out of any stone but flint or a natural
volcanic glass called obsidian. And even if you have flint or obsidian,
you first have to work up a special cone-shaped blade-core, from
which to whack off blades.
[Illustration: PLAIN BLADE]
You whack with a hammer stone against a bone or antler punch which is
directed at the proper place on the blade-core. The blade-core has to
be well supported or gripped while this is going on. To get a good
flint blade tool takes a great deal of know-how.
Remember that a tradition in stone tools means no more than that some
particular way of making the tools got started and lasted a long time.
Men who made some tools in one tradition or set of habits would also
make other tools for different purposes by means of another tradition
or set of habits. It was even possible for the two sets of habits to
become combined.
THE EARLIEST BLADE TOOLS
The oldest blade tools we have found were deep down in the layers of
the Mount Carmel caves, in Tabun Eb and Ea. Similar tools have been
found in equally early cave levels in Syria; their popularity there
seems to fluctuate a bit. Some more or less parallel-sided flakes are
known in the Levalloisian industry in France, but they are probably
no earlier than Tabun E. The Tabun blades are part of a local late
Acheulean industry, which is characterized by core-biface hand
axes, but which has many flake tools as well. Professor F. E.
Zeuner believes that this industry may be more than 120,000 years old;
actually its date has not yet been fixed, but it is very old--older
than the fossil finds of modern-like men in the same caves.
[Illustration: SUCCESSION OF ICE AGE FLINT TYPES, INDUSTRIES, AND
ASSEMBLAGES, AND OF FOSSIL MEN, IN NORTHWESTERN EURAFRASIA]
For some reason, the habit of making blades in Palestine and Syria was
interrupted. Blades only reappeared there at about the same time they
were first made in Europe, some time after 45,000 years ago; that is,
after the first phase of the last glaciation was ended.
[Illustration: BACKED BLADE]
We are not sure just where the earliest _persisting_ habits for the
production of blade tools developed. Impressed by the very early
momentary appearance of blades at Tabun on Mount Carmel, Professor
Dorothy A. Garrod first favored the Near East as a center of origin.
She spoke of "some as yet unidentified Asiatic centre," which she
thought might be in the highlands of Iran or just beyond. But more
recent work has been done in this area, especially by Professor Coon,
and the blade tools do not seem to have an early appearance there. When
the blade tools reappear in the Syro-Palestinian area, they do so in
industries which also include Levalloiso-Mousterian flake tools. From
the point of view of form and workmanship, the blade tools themselves
are not so fine as those which seem to be making their appearance
in western Europe about the same time. There is a characteristic
Syro-Palestinian flake point, possibly a projectile tip, called the
Emiran, which is not known from Europe. The appearance of blade tools,
together with Levalloiso-Mousterian flakes, continues even after the
Emiran point has gone out of use.
It seems clear that the production of blade tools did not immediately
swamp the set of older habits in Europe, too; the use of flake
tools also continued there. This was not so apparent to the older
archeologists, whose attention was focused on individual tool types. It
is not, in fact, impossible--although it is certainly not proved--that
the technique developed in the preparation of the Levalloisian tortoise
core (and the striking of the Levalloisian flake from it) might have
followed through to the conical core and punch technique for the
production of blades. Professor Garrod is much impressed with the speed
of change during the later phases of the last glaciation, and its
probable consequences. She speaks of the greater number of industries
"having enough individual character to be classified as distinct ...
since evolution now starts to outstrip diffusion." Her "evolution" here
is of course an industrial evolution rather than a biological one.
Certainly the people of Europe had begun to make blade tools during
the warm spell after the first phase of the last glaciation. By about
40,000 years ago blades were well established. The bones of the blade
tool makers we've found so far indicate that anatomically modern men
had now certainly appeared. Unfortunately, only a few fossil men have
so far been found from the very beginning of the blade tool range in
Europe (or elsewhere). What I certainly shall _not_ tell you is that
conquering bands of fine, strong, anatomically modern men, armed with
superior blade tools, came sweeping out of the East to exterminate the
lowly Neanderthalers. Even if we don't know exactly what happened, I'd
lay a good bet it wasn't that simple.
We do know a good deal about different blade industries in Europe.
Almost all of them come from cave layers. There is a great deal of
complication in what we find. The chart (p. 72) tries to simplify
this complication; in fact, it doubtless simplifies it too much. But
it may suggest all the complication of industries which is going
on at this time. You will note that the upper portion of my much
simpler chart (p. 65) covers the same material (in the section
marked "Various Blade-Tool Industries"). That chart is certainly too
simplified.
You will realize that all this complication comes not only from
the fact that we are finding more material. It is due also to the
increasing ability of men to adapt themselves to a great variety of
situations. Their tools indicate this adaptiveness. We know there was
a good deal of climatic change at this time. The plants and animals
that men used for food were changing, too. The great variety of tools
and industries we now find reflect these changes and the ability of men
to keep up with the times. Now, for example, is the first time we are
sure that there are tools to _make_ other tools. They also show men's
increasing ability to adapt themselves.
SPECIAL TYPES OF BLADE TOOLS
The most useful tools that appear at this time were made from blades.
1. The backed blade. This is a knife made of a flint blade, with
one edge purposely blunted, probably to save the user's fingers
from being cut. There are several shapes of backed blades (p.
73).
[Illustration: TWO BURINS]
2. The _burin_ or graver. The burin was the original chisel. Its
cutting edge is _transverse_, like a chisels. Some burins are
made like a screw-driver, save that burins are sharp. Others have
edges more like the blade of a chisel or a push plane, with
only one bevel. Burins were probably used to make slots in wood
and bone; that is, to make handles or shafts for other tools.
They must also be the tools with which much of the engraving on
bone (see p. 83) was done. There is a bewildering variety of
different kinds of burins.
[Illustration: TANGED POINT]
3. The tanged point. These stone points were used to tip arrows or
light spears. They were made from blades, and they had a long tang
at the bottom where they were fixed to the shaft. At the place
where the tang met the main body of the stone point, there was
a marked shoulder, the beginnings of a barb. Such points had
either one or two shoulders.
[Illustration: NOTCHED BLADE]
4. The notched or strangulated blade. Along with the points for
arrows or light spears must go a tool to prepare the arrow or
spear shaft. Today, such a tool would be called a draw-knife or
a spoke-shave, and this is what the notched blades probably are.
Our spoke-shaves have sharp straight cutting blades and really
shave. Notched blades of flint probably scraped rather than cut.
5. The awl, drill, or borer. These blade tools are worked out
to a spike-like point. They must have been used for making holes
in wood, bone, shell, skin, or other things.
[Illustration: DRILL OR AWL]
6. The end-scraper on a blade is a tool with one or both ends
worked so as to give a good scraping edge. It could have been used
to hollow out wood or bone, scrape hides, remove bark from trees,
and a number of other things (p. 78).
There is one very special type of flint tool, which is best known from
western Europe in an industry called the Solutrean. These tools were
usually made of blades, but the best examples are so carefully worked
on both sides (bifacially) that it is impossible to see the original
blade. This tool is
7. The laurel leaf point. Some of these tools were long and
dagger-like, and must have been used as knives or daggers. Others
were small, called "willow leaf," and must have been mounted on
spear or arrow shafts. Another typical Solutrean tool is the
shouldered point. Both the laurel leaf and shouldered point
types are illustrated (see above and p. 79).
[Illustration: END-SCRAPER ON A BLADE]
[Illustration: LAUREL LEAF POINT]
The industries characterized by tools in the blade tradition also
yield some flake and core tools. We will end this list with two types
of tools that appear at this time. The first is made of a flake; the
second is a core tool.
[Illustration: SHOULDERED POINT]
8. The keel-shaped round scraper is usually small and quite round,
and has had chips removed up to a peak in the center. It is called
"keel-shaped" because it is supposed to look (when upside down)
like a section through a boat. Actually, it looks more like a tent
or an umbrella. Its outer edges are sharp all the way around, and
it was probably a general purpose scraping tool (see illustration,
p. 81).
9. The keel-shaped nosed scraper is a much larger and heavier tool
than the round scraper. It was made on a core with a flat bottom,
and has one nicely worked end or nose. Such tools are usually
large enough to be easily grasped, and probably were used like
push planes (see illustration, p. 81).
[Illustration: KEEL-SHAPED ROUND SCRAPER]
[Illustration: KEEL-SHAPED NOSED SCRAPER]
The stone tools (usually made of flint) we have just listed are among
the most easily recognized blade tools, although they show differences
in detail at different times. There are also many other kinds. Not
all of these tools appear in any one industry at one time. Thus the
different industries shown in the chart (p. 72) each have only some
of the blade tools we've just listed, and also a few flake tools. Some
industries even have a few core tools. The particular types of blade
tools appearing in one cave layer or another, and the frequency of
appearance of the different types, tell which industry we have in each
layer.
OTHER KINDS OF TOOLS
By this time in Europe--say from about 40,000 to about 10,000 years
ago--we begin to find other kinds of material too. Bone tools begin
to appear. There are knives, pins, needles with eyes, and little
double-pointed straight bars of bone that were probably fish-hooks. The
fish-line would have been fastened in the center of the bar; when the
fish swallowed the bait, the bar would have caught cross-wise in the
fish's mouth.
One quite special kind of bone tool is a long flat point for a light
spear. It has a deep notch cut up into the breadth of its base, and is
called a split-based bone point (p. 82). We know examples of bone
beads from these times, and of bone handles for flint tools. Pierced
teeth of some animals were worn as beads or pendants, but I am not sure
that elk's teeth were worn this early. There are even spool-shaped
buttons or toggles.
[Illustration: SPLIT-BASED BONE POINT]
[Illustration: SPEAR-THROWER]
[Illustration: BONE HARPOON]
Antler came into use for tools, especially in central and western
Europe. We do not know the use of one particular antler tool that
has a large hole bored in one end. One suggestion is that it was
a "thong-stropper" used to strop or work up hide thongs (see
illustration, below); another suggestion is that it was an arrow-shaft
straightener.
Another interesting tool, usually of antler, is the spear-thrower,
which is little more than a stick with a notch or hook on one end.
The hook fits into the butt end of the spear, and the length of the
spear-thrower allows you to put much more power into the throw (p.
82). It works on pretty much the same principle as the sling.
Very fancy harpoons of antler were also made in the latter half of
the period in western Europe. These harpoons had barbs on one or both
sides and a base which would slip out of the shaft (p. 82). Some have
engraved decoration.
THE BEGINNING OF ART
[Illustration: THONG-STROPPER]
In western Europe, at least, the period saw the beginning of several
kinds of art work. It is handy to break the art down into two great
groups: the movable art, and the cave paintings and sculpture. The
movable art group includes the scratchings, engravings, and modeling
which decorate tools and weapons. Knives, stroppers, spear-throwers,
harpoons, and sometimes just plain fragments of bone or antler are
often carved. There is also a group of large flat pebbles which seem
almost to have served as sketch blocks. The surfaces of these various
objects may show animals, or rather abstract floral designs, or
geometric designs.
[Illustration: VENUS FIGURINE FROM WILLENDORF]
Some of the movable art is not done on tools. The most remarkable
examples of this class are little figures of women. These women seem to
be pregnant, and their most female characteristics are much emphasized.
It is thought that these "Venus" or "Mother-goddess" figurines may be
meant to show the great forces of nature--fertility and the birth of
life.
CAVE PAINTINGS
In the paintings on walls and ceilings of caves we have some examples
that compare with the best art of any time. The subjects were usually
animals, the great cold-weather beasts of the end of the Ice Age: the
mammoth, the wooly rhinoceros, the bison, the reindeer, the wild horse,
the bear, the wild boar, and wild cattle. As in the movable art, there
are different styles in the cave art. The really great cave art is
pretty well restricted to southern France and Cantabrian (northwestern)
Spain.
There are several interesting things about the Franco-Cantabrian cave
art. It was done deep down in the darkest and most dangerous parts of
the caves, although the men lived only in the openings of caves. If you
think what they must have had for lights--crude lamps of hollowed stone
have been found, which must have burned some kind of oil or grease,
with a matted hair or fiber wick--and of the animals that may have
lurked in the caves, you'll understand the part about danger. Then,
too, we're sure the pictures these people painted were not simply to be
looked at and admired, for they painted one picture right over other
pictures which had been done earlier. Clearly, it was the _act_ of
_painting_ that counted. The painter had to go way down into the most
mysterious depths of the earth and create an animal in paint. Possibly
he believed that by doing this he gained some sort of magic power over
the same kind of animal when he hunted it in the open air. It certainly
doesn't look as if he cared very much about the picture he painted--as
a finished product to be admired--for he or somebody else soon went
down and painted another animal right over the one he had done.
The cave art of the Franco-Cantabrian style is one of the great
artistic achievements of all time. The subjects drawn are almost always
the larger animals of the time: the bison, wild cattle and horses, the
wooly rhinoceros, the mammoth, the wild boar, and the bear. In some of
the best examples, the beasts are drawn in full color and the paintings
are remarkably alive and charged with energy. They come from the hands
of men who knew the great animals well--knew the feel of their fur, the
tremendous drive of their muscles, and the danger one faced when he
hunted them.
Another artistic style has been found in eastern Spain. It includes
lively drawings, often of people hunting with bow and arrow. The East
Spanish art is found on open rock faces and in rock-shelters. It is
less spectacular and apparently more recent than the Franco-Cantabrian
cave art.
LIFE AT THE END OF THE ICE AGE IN EUROPE
Life in these times was probably as good as a hunter could expect it
to be. Game and fish seem to have been plentiful; berries and wild
fruits probably were, too. From France to Russia, great pits or
piles of animal bones have been found. Some of this killing was done
as our Plains Indians killed the buffalo--by stampeding them over
steep river banks or cliffs. There were also good tools for hunting,
however. In western Europe, people lived in the openings of caves and
under overhanging rocks. On the great plains of eastern Europe, very
crude huts were being built, half underground. The first part of this
time must have been cold, for it was the middle and end phases of the
last great glaciation. Northern Europe from Scotland to Scandinavia,
northern Germany and Russia, and also the higher mountains to the
south, were certainly covered with ice. But people had fire, and the
needles and tools that were used for scraping hides must mean that they
wore clothing.
It is clear that men were thinking of a great variety of things beside
the tools that helped them get food and shelter. Such burials as we
find have more grave-gifts than before. Beads and ornaments and often
flint, bone, or antler tools are included in the grave, and sometimes
the body is sprinkled with red ochre. Red is the color of blood, which
means life, and of fire, which means heat. Professor Childe wonders if
the red ochre was a pathetic attempt at magic--to give back to the body
the heat that had gone from it. But pathetic or not, it is sure proof
that these people were already moved by death as men still are moved by
it.
Their art is another example of the direction the human mind was
taking. And when I say human, I mean it in the fullest sense, for this
is the time in which fully modern man has appeared. On page 34, we
spoke of the Cro-Magnon group and of the Combe Capelle-Brünn group of
Caucasoids and of the Grimaldi Negroids, who are no longer believed
to be Negroid. I doubt that any one of these groups produced most of
the achievements of the times. It's not yet absolutely sure which
particular group produced the great cave art. The artists were almost
certainly a blend of several (no doubt already mixed) groups. The pair
of Grimaldians were buried in a grave with a sprinkling of red ochre,
and were provided with shell beads and ornaments and with some blade
tools of flint. Regardless of the different names once given them by
the human paleontologists, each of these groups seems to have shared
equally in the cultural achievements of the times, for all that the
archeologists can say.
MICROLITHS
One peculiar set of tools seems to serve as a marker for the very last
phase of the Ice Age in southwestern Europe. This tool-making habit is
also found about the shore of the Mediterranean basin, and it moved
into northern Europe as the last glaciation pulled northward. People
began making blade tools of very small size. They learned how to chip
very slender and tiny blades from a prepared core. Then they made these
little blades into tiny triangles, half-moons (lunates), trapezoids,
and several other geometric forms. These little tools are called
microliths. They are so small that most of them must have been fixed
in handles or shafts.
[Illustration: MICROLITHS
BLADE FRAGMENT
BURIN
LUNATE
TRAPEZOID
SCALENE TRIANGLE
ARROWHEAD]
We have found several examples of microliths mounted in shafts. In
northern Europe, where their use soon spread, the microlithic triangles
or lunates were set in rows down each side of a bone or wood point.
One corner of each little triangle stuck out, and the whole thing
made a fine barbed harpoon. In historic times in Egypt, geometric
trapezoidal microliths were still in use as arrowheads. They were
fastened--broad end out--on the end of an arrow shaft. It seems queer
to give an arrow a point shaped like a T. Actually, the little points
were very sharp, and must have pierced the hides of animals very
easily. We also think that the broader cutting edge of the point may
have caused more bleeding than a pointed arrowhead would. In hunting
fleet-footed animals like the gazelle, which might run for miles after
being shot with an arrow, it was an advantage to cause as much bleeding
as possible, for the animal would drop sooner.
We are not really sure where the microliths were first invented. There
is some evidence that they appear early in the Near East. Their use
was very common in northwest Africa but this came later. The microlith
makers who reached south Russia and central Europe possibly moved up
out of the Near East. Or it may have been the other way around; we
simply don't yet know.
Remember that the microliths we are talking about here were made from
carefully prepared little blades, and are often geometric in outline.
Each microlithic industry proper was made up, in good part, of such
tiny blade tools. But there were also some normal-sized blade tools and
even some flake scrapers, in most microlithic industries. I emphasize
this bladelet and the geometric character of the microlithic industries
of the western Old World, since there has sometimes been confusion in
the matter. Sometimes small flake chips, utilized as minute pointed
tools, have been called microliths. They may be _microlithic_ in size
in terms of the general meaning of the word, but they do not seem to
belong to the sub-tradition of the blade tool preparation habits which
we have been discussing here.
LATER BLADE-TOOL INDUSTRIES OF THE NEAR EAST AND AFRICA
The blade-tool industries of normal size we talked about earlier spread
from Europe to central Siberia. We noted that blade tools were made
in western Asia too, and early, although Professor Garrod is no longer
sure that the whole tradition originated in the Near East. If you look
again at my chart (p. 72) you will note that in western Asia I list
some of the names of the western European industries, but with the
qualification "-like" (for example, "Gravettian-like"). The western
Asiatic blade-tool industries do vaguely recall some aspects of those
of western Europe, but we would probably be better off if we used
completely local names for them. The Emiran of my chart is such an
example; its industry includes a long spike-like blade point which has
no western European counterpart.
When we last spoke of Africa (p. 66), I told you that stone tools
there were continuing in the Levalloisian flake tradition, and were
becoming smaller. At some time during this process, two new tool
types appeared in northern Africa: one was the Aterian point with
a tang (p. 67), and the other was a sort of laurel leaf point,
called the Sbaikian. These two tool types were both produced from
flakes. The Sbaikian points, especially, are roughly similar to some
of the Solutrean points of Europe. It has been suggested that both the
Sbaikian and Aterian points may be seen on their way to France through
their appearance in the Spanish cave deposits of Parpallo, but there is
also a rival "pre-Solutrean" in central Europe. We still do not know
whether there was any contact between the makers of these north African
tools and the Solutrean tool-makers. What does seem clear is that the
blade-tool tradition itself arrived late in northern Africa.
NETHER AFRICA
Blade tools and laurel leaf points and some other probably late
stone tool types also appear in central and southern Africa. There
are geometric microliths on bladelets and even some coarse pottery in
east Africa. There is as yet no good way of telling just where these
items belong in time; in broad geological terms they are "late."
Some people have guessed that they are as early as similar European
and Near Eastern examples, but I doubt it. The makers of small-sized
Levalloisian flake tools occupied much of Africa until very late in
time.
THE FAR EAST
India and the Far East still seem to be going their own way. In India,
some blade tools have been found. These are not well dated, save that
we believe they must be post-Pleistocene. In the Far East it looks as
if the old chopper-tool tradition was still continuing. For Burma,
Dr. Movius feels this is fairly certain; for China he feels even more
certain. Actually, we know very little about the Far East at about the
time of the last glaciation. This is a shame, too, as you will soon
agree.
THE NEW WORLD BECOMES INHABITED
At some time toward the end of the last great glaciation--almost
certainly after 20,000 years ago--people began to move over Bering
Strait, from Asia into America. As you know, the American Indians have
been assumed to be basically Mongoloids. New studies of blood group
types make this somewhat uncertain, but there is no doubt that the
ancestors of the American Indians came from Asia.
The stone-tool traditions of Europe, Africa, the Near and Middle East,
and central Siberia, did _not_ move into the New World. With only a
very few special or late exceptions, there are _no_ core-bifaces,
flakes, or blade tools of the Old World. Such things just haven't been
found here.
This is why I say it's a shame we don't know more of the end of the
chopper-tool tradition in the Far East. According to Weidenreich,
the Mongoloids were in the Far East long before the end of the last
glaciation. If the genetics of the blood group types do demand a
non-Mongoloid ancestry for the American Indians, who else may have been
in the Far East 25,000 years ago? We know a little about the habits
for making stone tools which these first people brought with them,
and these habits don't conform with those of the western Old World.
We'd better keep our eyes open for whatever happened to the end of
the chopper-tool tradition in northern China; already there are hints
that it lasted late there. Also we should watch future excavations
in eastern Siberia. Perhaps we shall find the chopper-tool tradition
spreading up that far.
THE NEW ERA
Perhaps it comes in part from the way I read the evidence and perhaps
in part it is only intuition, but I feel that the materials of this
chapter suggest a new era in the ways of life. Before about 40,000
years ago, people simply gathered their food, wandering over large
areas to scavenge or to hunt in a simple sort of way. But here we
have seen them settling-in more, perhaps restricting themselves in
their wanderings and adapting themselves to a given locality in more
intensive ways. This intensification might be suggested by the word
"collecting." The ways of life we described in the earlier chapters
were "food-gathering" ways, but now an era of "food-collecting" has
begun. We shall see further intensifications of it in the next chapter.
End and PRELUDE
[Illustration]
Up to the end of the last glaciation, we prehistorians have a
relatively comfortable time schedule. The farther back we go the less
exact we can be about time and details. Elbow-room of five, ten,
even fifty or more thousands of years becomes available for us to
maneuver in as we work backward in time. But now our story has come
forward to the point where more exact methods of dating are at hand.
The radioactive carbon method reaches back into the span of the last
glaciation. There are other methods, developed by the geologists and
paleobotanists, which supplement and extend the usefulness of the
radioactive carbon dates. And, happily, as our means of being more
exact increases, our story grows more exciting. There are also more
details of culture for us to deal with, which add to the interest.
CHANGES AT THE END OF THE ICE AGE
The last great glaciation of the Ice Age was a two-part affair, with a
sub-phase at the end of the second part. In Europe the last sub-phase
of this glaciation commenced somewhere around 15,000 years ago. Then
the glaciers began to melt back, for the last time. Remember that
Professor Antevs (p. 19) isn't sure the Ice Age is over yet! This
melting sometimes went by fits and starts, and the weather wasn't
always changing for the better; but there was at least one time when
European weather was even better than it is now.
The melting back of the glaciers and the weather fluctuations caused
other changes, too. We know a fair amount about these changes in
Europe. In an earlier chapter, we said that the whole Ice Age was a
matter of continual change over long periods of time. As the last
glaciers began to melt back some interesting things happened to mankind.
In Europe, along with the melting of the last glaciers, geography
itself was changing. Britain and Ireland had certainly become islands
by 5000 B.C. The Baltic was sometimes a salt sea, sometimes a large
fresh-water lake. Forests began to grow where the glaciers had been,
and in what had once been the cold tundra areas in front of the
glaciers. The great cold-weather animals--the mammoth and the wooly
rhinoceros--retreated northward and finally died out. It is probable
that the efficient hunting of the earlier people of 20,000 or 25,000
to about 12,000 years ago had helped this process along (see p. 86).
Europeans, especially those of the post-glacial period, had to keep
changing to keep up with the times.
The archeological materials for the time from 10,000 to 6000 B.C. seem
simpler than those of the previous five thousand years. The great cave
art of France and Spain had gone; so had the fine carving in bone and
antler. Smaller, speedier animals were moving into the new forests. New
ways of hunting them, or ways of getting other food, had to be found.
Hence, new tools and weapons were necessary. Some of the people who
moved into northern Germany were successful reindeer hunters. Then the
reindeer moved off to the north, and again new sources of food had to
be found.
THE READJUSTMENTS COMPLETED IN EUROPE
After a few thousand years, things began to look better. Or at least
we can say this: By about 6000 B.C. we again get "hotter" archeological
materials. The best of these come from the north European area:
Britain, Belgium, Holland, Denmark, north Germany, southern Norway and
Sweden. Much of this north European material comes from bogs and swamps
where it had become water-logged and has kept very well. Thus we have
much more complete _assemblages_[4] than for any time earlier.
[4] "Assemblage" is a useful word when there are different kinds of
archeological materials belonging together, from one area and of
one time. An assemblage is made up of a number of "industries"
(that is, all the tools in chipped stone, all the tools in
bone, all the tools in wood, the traces of houses, etc.) and
everything else that manages to survive, such as the art, the
burials, the bones of the animals used as food, and the traces
of plant foods; in fact, everything that has been left to us
and can be used to help reconstruct the lives of the people to
whom it once belonged. Our own present-day assemblage would be
the sum total of all the objects in our mail-order catalogues,
department stores and supply houses of every sort, our churches,
our art galleries and other buildings, together with our roads,
canals, dams, irrigation ditches, and any other traces we might
leave of ourselves, from graves to garbage dumps. Not everything
would last, so that an archeologist digging us up--say 2,000
years from now--would find only the most durable items in our
assemblage.
The best known of these assemblages is the _Maglemosian_, named after a
great Danish peat-swamp where much has been found.
[Illustration: SKETCH OF MAGLEMOSIAN ASSEMBLAGE
CHIPPED STONE
HEMP
GROUND STONE
BONE AND ANTLER
WOOD]
In the Maglemosian assemblage the flint industry was still very
important. Blade tools, tanged arrow points, and burins were still
made, but there were also axes for cutting the trees in the new
forests. Moreover, the tiny microlithic blades, in a variety of
geometric forms, are also found. Thus, a specialized tradition that
possibly began east of the Mediterranean had reached northern Europe.
There was also a ground stone industry; some axes and club-heads were
made by grinding and polishing rather than by chipping. The industries
in bone and antler show a great variety of tools: axes, fish-hooks,
fish spears, handles and hafts for other tools, harpoons, and clubs.
A remarkable industry in wood has been preserved. Paddles, sled
runners, handles for tools, and bark floats for fish-nets have been
found. There are even fish-nets made of plant fibers. Canoes of some
kind were no doubt made. Bone and antler tools were decorated with
simple patterns, and amber was collected. Wooden bows and arrows are
found.
It seems likely that the Maglemosian bog finds are remains of summer
camps, and that in winter the people moved to higher and drier regions.
Childe calls them the "Forest folk"; they probably lived much the
same sort of life as did our pre-agricultural Indians of the north
central states. They hunted small game or deer; they did a great deal
of fishing; they collected what plant food they could find. In fact,
their assemblage shows us again that remarkable ability of men to adapt
themselves to change. They had succeeded in domesticating the dog; he
was still a very wolf-like dog, but his long association with mankind
had now begun. Professor Coon believes that these people were direct
descendants of the men of the glacial age and that they had much the
same appearance. He believes that most of the Ice Age survivors still
extant are living today in the northwestern European area.
SOUTH AND CENTRAL EUROPE PERHAPS AS READJUSTED AS THE NORTH
There is always one trouble with things that come from areas where
preservation is exceptionally good: The very quantity of materials in
such an assemblage tends to make things from other areas look poor
and simple, although they may not have been so originally at all. The
assemblages of the people who lived to the south of the Maglemosian
area may also have been quite large and varied; but, unfortunately,
relatively little of the southern assemblages has lasted. The
water-logged sites of the Maglemosian area preserved a great deal
more. Hence the Maglemosian itself _looks_ quite advanced to us, when
we compare it with the few things that have happened to last in other
areas. If we could go back and wander over the Europe of eight thousand
years ago, we would probably find that the peoples of France, central
Europe, and south central Russia were just as advanced as those of the
north European-Baltic belt.
South of the north European belt the hunting-food-collecting peoples
were living on as best they could during this time. One interesting
group, which seems to have kept to the regions of sandy soil and scrub
forest, made great quantities of geometric microliths. These are the
materials called _Tardenoisian_. The materials of the Forest folk of
France and central Europe generally are called _Azilian_; Dr. Movius
believes the term might best be restricted to the area south of the
Loire River.
HOW MUCH REAL CHANGE WAS THERE?
You can see that no really _basic_ change in the way of life has yet
been described. Childe sees the problem that faced the Europeans of
10,000 to 3000 B.C. as a problem in readaptation to the post-glacial
forest environment. By 6000 B.C. some quite successful solutions of
the problem--like the Maglemosian--had been made. The upsets that came
with the melting of the last ice gradually brought about all sorts of
changes in the tools and food-getting habits, but the people themselves
were still just as much simple hunters, fishers, and food-collectors as
they had been in 25,000 B.C. It could be said that they changed just
enough so that they would not have to change. But there is a bit more
to it than this.
Professor Mathiassen of Copenhagen, who knows the archeological remains
of this time very well, poses a question. He speaks of the material
as being neither rich nor progressive, in fact rather stagnant, but
he goes on to add that the people had a certain receptiveness and
were able to adapt themselves quickly when the next change did come.
My own understanding of the situation is that the Forest folk made
nothing as spectacular as had the producers of the earlier Magdalenian
assemblage and the Franco-Cantabrian art. On the other hand, they
_seem_ to have been making many more different kinds of tools for many
more different kinds of tasks than had their Ice Age forerunners. I
emphasize _seem_ because the preservation in the Maglemosian bogs
is very complete; certainly we cannot list anywhere near as many
different things for earlier times as we did for the Maglemosians
(p. 94). I believe this experimentation with all kinds of new tools
and gadgets, this intensification of adaptiveness (p. 91), this
receptiveness, even if it is still only pointed toward hunting,
fishing, and food-collecting, is an important thing.
Remember that the only marker we have handy for the _beginning_ of
this tendency toward receptiveness and experimentation is the
little microlithic blade tools of various geometric forms. These, we
saw, began before the last ice had melted away, and they lasted on
in use for a very long time. I wish there were a better marker than
the microliths but I do not know of one. Remember, too, that as yet
we can only use the microliths as a marker in Europe and about the
Mediterranean.
CHANGES IN OTHER AREAS?
All this last section was about Europe. How about the rest of the world
when the last glaciers were melting away?
We simply don't know much about this particular time in other parts
of the world except in Europe, the Mediterranean basin and the Middle
East. People were certainly continuing to move into the New World by
way of Siberia and the Bering Strait about this time. But for the
greater part of Africa and Asia, we do not know exactly what was
happening. Some day, we shall no doubt find out; today we are without
clear information.
REAL CHANGE AND PRELUDE IN THE NEAR EAST
The appearance of the microliths and the developments made by the
Forest folk of northwestern Europe also mark an end. They show us
the terminal phase of the old food-collecting way of life. It grows
increasingly clear that at about the same time that the Maglemosian and
other Forest folk were adapting themselves to hunting, fishing, and
collecting in new ways to fit the post-glacial environment, something
completely new was being made ready in western Asia.
Unfortunately, we do not have as much understanding of the climate and
environment of the late Ice Age in western Asia as we have for most
of Europe. Probably the weather was never so violent or life quite
so rugged as it was in northern Europe. We know that the microliths
made their appearance in western Asia at least by 10,000 B.C. and
possibly earlier, marking the beginning of the terminal phase of
food-collecting. Then, gradually, we begin to see the build-up towards
the first _basic change_ in human life.
This change amounted to a revolution just as important as the
Industrial Revolution. In it, men first learned to domesticate
plants and animals. They began _producing_ their food instead of
simply gathering or collecting it. When their food-production
became reasonably effective, people could and did settle down in
village-farming communities. With the appearance of the little farming
villages, a new way of life was actually under way. Professor Childe
has good reason to speak of the food-producing revolution, for it was
indeed a revolution.
QUESTIONS ABOUT CAUSE
We do not yet know _how_ and _why_ this great revolution took place. We
are only just beginning to put the questions properly. I suspect the
answers will concern some delicate and subtle interplay between man and
nature. Clearly, both the level of culture and the natural condition of
the environment must have been ready for the great change, before the
change itself could come about.
It is going to take years of co-operative field work by both
archeologists and the natural scientists who are most helpful to them
before the _how_ and _why_ answers begin to appear. Anthropologically
trained archeologists are fascinated with the cultures of men in times
of great change. About ten or twelve thousand years ago, the general
level of culture in many parts of the world seems to have been ready
for change. In northwestern Europe, we saw that cultures changed
just enough so that they would not have to change. We linked this to
environmental changes with the coming of post-glacial times.
In western Asia, we archeologists can prove that the food-producing
revolution actually took place. We can see _the_ important consequence
of effective domestication of plants and animals in the appearance of
the settled village-farming community. And within the village-farming
community was the seed of civilization. The way in which effective
domestication of plants and animals came about, however, must also be
linked closely with the natural environment. Thus the archeologists
will not solve the _how_ and _why_ questions alone--they will need the
help of interested natural scientists in the field itself.
PRECONDITIONS FOR THE REVOLUTION
Especially at this point in our story, we must remember how culture and
environment go hand in hand. Neither plants nor animals domesticate
themselves; men domesticate them. Furthermore, men usually domesticate
only those plants and animals which are useful. There is a good
question here: What is "cultural usefulness"? But I shall side-step it to
save time. Men cannot domesticate plants and animals that do not exist
in the environment where the men live. Also, there are certainly some
animals and probably some plants that resist domestication, although
they might be useful.
This brings me back again to the point that _both_ the level of culture
and the natural condition of the environment--with the proper plants
and animals in it--must have been ready before domestication could
have happened. But this is precondition, not cause. Why did effective
food-production happen first in the Near East? Why did it happen
independently in the New World slightly later? Why also in the Far
East? Why did it happen at all? Why are all human beings not still
living as the Maglemosians did? These are the questions we still have
to face.
CULTURAL RECEPTIVENESS AND PROMISING ENVIRONMENTS
Until the archeologists and the natural scientists--botanists,
geologists, zoologists, and general ecologists--have spent many more
years on the problem, we shall not have full _how_ and _why_ answers. I
do think, however, that we are beginning to understand what to look for.
We shall have to learn much more of what makes the cultures of men
receptive and experimental. Did change in the environment alone
force it? Was it simply a case of Professor Toynbee's "challenge and
response"? I cannot believe the answer is quite that simple. Were it
so simple, we should want to know why the change hadn't come earlier,
along with earlier environmental changes. We shall not know the answer,
however, until we have excavated the traces of many more cultures of
the time in question. We shall doubtless also have to learn more about,
and think imaginatively about, the simpler cultures still left today.
The mechanics of culture in general will be bound to interest us.
It will also be necessary to learn much more of the environments of
10,000 to 12,000 years ago. In which regions of the world were the
natural conditions most promising? Did this promise include plants and
animals which could be domesticated, or did it only offer new ways of
food-collecting? There is much work to do on this problem, but we are
beginning to get some general hints.
Before I begin to detail the hints we now have from western Asia, I
want to do two things. First, I shall tell you of an old theory as to
how food-production might have appeared. Second, I will bother you with
some definitions which should help us in our thinking as the story goes
on.
AN OLD THEORY AS TO THE CAUSE OF THE REVOLUTION
The idea that change would result, if the balance between nature
and culture became upset, is of course not a new one. For at least
twenty-five years, there has been a general theory as to _how_ the
food-producing revolution happened. This theory depends directly on the
idea of natural change in the environment.
The five thousand years following about 10,000 B.C. must have been
very difficult ones, the theory begins. These were the years when
the most marked melting of the last glaciers was going on. While the
glaciers were in place, the climate to the south of them must have been
different from the climate in those areas today. You have no doubt read
that people once lived in regions now covered by the Sahara Desert.
This is true; just when is not entirely clear. The theory is that
during the time of the glaciers, there was a broad belt of rain winds
south of the glaciers. These rain winds would have kept north Africa,
the Nile Valley, and the Middle East green and fertile. But when the
glaciers melted back to the north, the belt of rain winds is supposed
to have moved north too. Then the people living south and east of the
Mediterranean would have found that their water supply was drying up,
that the animals they hunted were dying or moving away, and that the
plant foods they collected were dried up and scarce.
According to the theory, all this would have been true except in the
valleys of rivers and in oases in the growing deserts. Here, in the
only places where water was left, the men and animals and plants would
have clustered. They would have been forced to live close to one
another, in order to live at all. Presently the men would have seen
that some animals were more useful or made better food than others,
and so they would have begun to protect these animals from their
natural enemies. The men would also have been forced to try new plant
foods--foods which possibly had to be prepared before they could be
eaten. Thus, with trials and errors, but by being forced to live close
to plants and animals, men would have learned to domesticate them.
THE OLD THEORY TOO SIMPLE FOR THE FACTS
This theory was set up before we really knew anything in detail about
the later prehistory of the Near and Middle East. We now know that
the facts which have been found don't fit the old theory at all well.
Also, I have yet to find an American meteorologist who feels that we
know enough about the changes in the weather pattern to say that it can
have been so simple and direct. And, of course, the glacial ice which
began melting after 12,000 years ago was merely the last sub-phase of
the last great glaciation. There had also been three earlier periods
of great alpine glaciers, and long periods of warm weather in between.
If the rain belt moved north as the glaciers melted for the last time,
it must have moved in the same direction in earlier times. Thus, the
forced neighborliness of men, plants, and animals in river valleys and
oases must also have happened earlier. Why didn't domestication happen
earlier, then?
Furthermore, it does not seem to be in the oases and river valleys
that we have our first or only traces of either food-production
or the earliest farming villages. These traces are also in the
hill-flanks of the mountains of western Asia. Our earliest sites of the
village-farmers do not seem to indicate a greatly different climate
from that which the same region now shows. In fact, everything we now
know suggests that the old theory was just too simple an explanation to
have been the true one. The only reason I mention it--beyond correcting
the ideas you may get in the general texts--is that it illustrates the
kind of thinking we shall have to do, even if it is doubtless wrong in
detail.
We archeologists shall have to depend much more than we ever have on
the natural scientists who can really help us. I can tell you this from
experience. I had the great good fortune to have on my expedition staff
in Iraq in 1954-55, a geologist, a botanist, and a zoologist. Their
studies added whole new bands of color to my spectrum of thinking about
_how_ and _why_ the revolution took place and how the village-farming
community began. But it was only a beginning; as I said earlier, we are
just now learning to ask the proper questions.
ABOUT STAGES AND ERAS
Now come some definitions, so I may describe my material more easily.
Archeologists have always loved to make divisions and subdivisions
within the long range of materials which they have found. They often
disagree violently about which particular assemblage of material
goes into which subdivision, about what the subdivisions should be
named, about what the subdivisions really mean culturally. Some
archeologists, probably through habit, favor an old scheme of Grecized
names for the subdivisions: paleolithic, mesolithic, neolithic. I
refuse to use these words myself. They have meant too many different
things to too many different people and have tended to hide some pretty
fuzzy thinking. Probably you haven't even noticed my own scheme of
subdivision up to now, but I'd better tell you in general what it is.
I think of the earliest great group of archeological materials, from
which we can deduce only a food-gathering way of culture, as the
_food-gathering stage_. I say "stage" rather than "age," because it
is not quite over yet; there are still a few primitive people in
out-of-the-way parts of the world who remain in the _food-gathering
stage_. In fact, Professor Julian Steward would probably prefer to call
it a food-gathering _level_ of existence, rather than a stage. This
would be perfectly acceptable to me. I also tend to find myself using
_collecting_, rather than _gathering_, for the more recent aspects or
era of the stage, as the word "collecting" appears to have more sense
of purposefulness and specialization than does "gathering" (see p.
91).
Now, while I think we could make several possible subdivisions of the
food-gathering stage--I call my subdivisions of stages _eras_[5]--I
believe the only one which means much to us here is the last or
_terminal sub-era of food-collecting_ of the whole food-gathering
stage. The microliths seem to mark its approach in the northwestern
part of the Old World. It is really shown best in the Old World by
the materials of the Forest folk, the cultural adaptation to the
post-glacial environment in northwestern Europe. We talked about
the Forest folk at the beginning of this chapter, and I used the
Maglemosian assemblage of Denmark as an example.
[5] It is difficult to find words which have a sequence or gradation
of meaning with respect to both development and a range of time
in the past, or with a range of time from somewhere in the past
which is perhaps not yet ended. One standard Webster definition
of _stage_ is: "One of the steps into which the material
development of man ... is divided." I cannot find any dictionary
definition that suggests which of the words, _stage_ or _era_,
has the meaning of a longer span of time. Therefore, I have
chosen to let my eras be shorter, and to subdivide my stages
into eras. Webster gives _era_ as: "A signal stage of history,
an epoch." When I want to subdivide my eras, I find myself using
_sub-eras_. Thus I speak of the _eras_ within a _stage_ and of
the _sub-eras_ within an _era_; that is, I do so when I feel
that I really have to, and when the evidence is clear enough to
allow it.
The food-producing revolution ushers in the _food-producing stage_.
This stage began to be replaced by the _industrial stage_ only about
two hundred years ago. Now notice that my stage divisions are in terms
of technology and economics. We must think sharply to be sure that the
subdivisions of the stages, the eras, are in the same terms. This does
not mean that I think technology and economics are the only important
realms of culture. It is rather that for most of prehistoric time the
materials left to the archeologists tend to limit our deductions to
technology and economics.
I'm so soon out of my competence, as conventional ancient history
begins, that I shall only suggest the earlier eras of the
food-producing stage to you. This book is about prehistory, and I'm not
a universal historian.
THE TWO EARLIEST ERAS OF THE FOOD-PRODUCING STAGE
The food-producing stage seems to appear in western Asia with really
revolutionary suddenness. It is seen by the relative speed with which
the traces of new crafts appear in the earliest village-farming
community sites we've dug. It is seen by the spread and multiplication
of these sites themselves, and the remarkable growth in human
population we deduce from this increase in sites. Well look at some
of these sites and the archeological traces they yield in the next
chapter. When such village sites begin to appear, I believe we are in
the _era of the primary village-farming community_. I also believe this
is the second era of the food-producing stage.
The first era of the food-producing stage, I believe, was an _era of
incipient cultivation and animal domestication_. I keep saying "I
believe" because the actual evidence for this earlier era is so slight
that one has to set it up mainly by playing a hunch for it. The reason
for playing the hunch goes about as follows.
One thing we seem to be able to see, in the food-collecting era in
general, is a tendency for people to begin to settle down. This
settling down seemed to become further intensified in the terminal
era. How this is connected with Professor Mathiassen's receptiveness
and the tendency to be experimental, we do not exactly know. The
evidence from the New World comes into play here as well as that from
the Old World. With this settling down in one place, the people of the
terminal era--especially the Forest folk whom we know best--began
making a great variety of new things. I remarked about this earlier in
the chapter. Dr. Robert M. Adams is of the opinion that this atmosphere
of experimentation with new tools--with new ways of collecting food--is
the kind of atmosphere in which one might expect trials at planting
and at animal domestication to have been made. We first begin to find
traces of more permanent life in outdoor camp sites, although caves
were still inhabited at the beginning of the terminal era. It is not
surprising at all that the Forest folk had already domesticated the
dog. In this sense, the whole era of food-collecting was becoming ready
and almost incipient for cultivation and animal domestication.
Northwestern Europe was not the place for really effective beginnings
in agriculture and animal domestication. These would have had to take
place in one of those natural environments of promise, where a variety
of plants and animals, each possible of domestication, was available in
the wild state. Let me spell this out. Really effective food-production
must include a variety of items to make up a reasonably well-rounded
diet. The food-supply so produced must be trustworthy, even though
the food-producing peoples themselves might be happy to supplement
it with fish and wild strawberries, just as we do when such things
are available. So, as we said earlier, part of our problem is that
of finding a region with a natural environment which includes--and
did include, some ten thousand years ago--a variety of possibly
domesticable wild plants and animals.
NUCLEAR AREAS
Now comes the last of my definitions. A region with a natural
environment which included a variety of wild plants and animals,
both possible and ready for domestication, would be a central
or core or _nuclear area_, that is, it would be when and _if_
food-production took place within it. It is pretty hard for me to
imagine food-production having ever made an independent start outside
such a nuclear area, although there may be some possible nuclear areas
in which food-production never took place (possibly in parts of Africa,
for example).
We know of several such nuclear areas. In the New World, Middle America
and the Andean highlands make up one or two; it is my understanding
that the evidence is not yet clear as to which. There seems to have
been a nuclear area somewhere in southeastern Asia, in the Malay
peninsula or Burma perhaps, connected with the early cultivation of
taro, breadfruit, the banana and the mango. Possibly the cultivation
of rice and the domestication of the chicken and of zebu cattle and
the water buffalo belong to this southeast Asiatic nuclear area. We
know relatively little about it archeologically, as yet. The nuclear
area which was the scene of the earliest experiment in effective
food-production was in western Asia. Since I know it best, I shall use
it as my example.
THE NUCLEAR NEAR EAST
The nuclear area of western Asia is naturally the one of greatest
interest to people of the western cultural tradition. Our cultural
heritage began within it. The area itself is the region of the hilly
flanks of rain-watered grass-land which build up to the high mountain
ridges of Iran, Iraq, Turkey, Syria, and Palestine. The map on page
125 indicates the region. If you have a good atlas, try to locate the
zone which surrounds the drainage basin of the Tigris and Euphrates
Rivers at elevations of from approximately 2,000 to 5,000 feet. The
lower alluvial land of the Tigris-Euphrates basin itself has very
little rainfall. Some years ago Professor James Henry Breasted called
the alluvial lands of the Tigris-Euphrates a part of the "fertile
crescent." These alluvial lands are very fertile if irrigated. Breasted
was most interested in the oriental civilizations of conventional
ancient history, and irrigation had been discovered before they
appeared.
The country of hilly flanks above Breasted's crescent receives from
10 to 20 or more inches of winter rainfall each year, which is about
what Kansas has. Above the hilly-flanks zone tower the peaks and ridges
of the Lebanon-Amanus chain bordering the coast-line from Palestine
to Turkey, the Taurus Mountains of southern Turkey, and the Zagros
range of the Iraq-Iran borderland. This rugged mountain frame for our
hilly-flanks zone rises to some magnificent alpine scenery, with peaks
of from ten to fifteen thousand feet in elevation. There are several
gaps in the Mediterranean coastal portion of the frame, through which
the winter's rain-bearing winds from the sea may break so as to carry
rain to the foothills of the Taurus and the Zagros.
The picture I hope you will have from this description is that of an
intermediate hilly-flanks zone lying between two regions of extremes.
The lower Tigris-Euphrates basin land is low and far too dry and hot
for agriculture based on rainfall alone; to the south and southwest, it
merges directly into the great desert of Arabia. The mountains which
lie above the hilly-flanks zone are much too high and rugged to have
encouraged farmers.
THE NATURAL ENVIRONMENT OF THE NUCLEAR NEAR EAST
The more we learn of this hilly-flanks zone that I describe, the
more it seems surely to have been a nuclear area. This is where we
archeologists need, and are beginning to get, the help of natural
scientists. They are coming to the conclusion that the natural
environment of the hilly-flanks zone today is much as it was some eight
to ten thousand years ago. There are still two kinds of wild wheat and
a wild barley, and the wild sheep, goat, and pig. We have discovered
traces of each of these at about nine thousand years ago, also traces
of wild ox, horse, and dog, each of which appears to be the probable
ancestor of the domesticated form. In fact, at about nine thousand
years ago, the two wheats, the barley, and at least the goat, were
already well on the road to domestication.
The wild wheats give us an interesting clue. They are only available
together with the wild barley within the hilly-flanks zone. While the
wild barley grows in a variety of elevations and beyond the zone,
at least one of the wild wheats does not seem to grow below the hill
country. As things look at the moment, the domestication of both the
wheats together could _only_ have taken place within the hilly-flanks
zone. Barley seems to have first come into cultivation due to its
presence as a weed in already cultivated wheat fields. There is also
a suggestion--there is still much more to learn in the matter--that
the animals which were first domesticated were most at home up in the
hilly-flanks zone in their wild state.
With a single exception--that of the dog--the earliest positive
evidence of domestication includes the two forms of wheat, the barley,
and the goat. The evidence comes from within the hilly-flanks zone.
However, it comes from a settled village proper, Jarmo (which I'll
describe in the next chapter), and is thus from the era of the primary
village-farming community. We are still without positive evidence of
domesticated grain and animals in the first era of the food-producing
stage, that of incipient cultivation and animal domestication.
THE ERA OF INCIPIENT CULTIVATION AND ANIMAL DOMESTICATION
I said above (p. 105) that my era of incipient cultivation and animal
domestication is mainly set up by playing a hunch. Although we cannot
really demonstrate it--and certainly not in the Near East--it would
be very strange for food-collectors not to have known a great deal
about the plants and animals most useful to them. They do seem to have
domesticated the dog. We can easily imagine them remembering to go
back, season after season, to a particular patch of ground where seeds
or acorns or berries grew particularly well. Most human beings, unless
they are extremely hungry, are attracted to baby animals, and many wild
pups or fawns or piglets must have been brought back alive by hunting
parties.
In this last sense, man has probably always been an incipient
cultivator and domesticator. But I believe that Adams is right in
suggesting that this would be doubly true with the experimenters of
the terminal era of food-collecting. We noticed that they also seem
to have had a tendency to settle down. Now my hunch goes that _when_
this experimentation and settling down took place within a potential
nuclear area--where a whole constellation of plants and animals
possible of domestication was available--the change was easily made.
Professor Charles A. Reed, our field colleague in zoology, agrees that
year-round settlement with plant domestication probably came before
there were important animal domestications.
INCIPIENT ERAS AND NUCLEAR AREAS
I have put this scheme into a simple chart (p. 111) with the names
of a few of the sites we are going to talk about. You will see that my
hunch means that there are eras of incipient cultivation _only_ within
nuclear areas. In a nuclear area, the terminal era of food-collecting
would probably have been quite short. I do not know for how long a time
the era of incipient cultivation and domestication would have lasted,
but perhaps for several thousand years. Then it passed on into the era
of the primary village-farming community.
Outside a nuclear area, the terminal era of food-collecting would last
for a long time; in a few out-of-the-way parts of the world, it still
hangs on. It would end in any particular place through contact with
and the spread of ideas of people who had passed on into one of the
more developed eras. In many cases, the terminal era of food-collecting
was ended by the incoming of the food-producing peoples themselves.
For example, the practices of food-production were carried into Europe
by the actual movement of some numbers of peoples (we don't know how
many) who had reached at least the level of the primary village-farming
community. The Forest folk learned food-production from them. There
was never an era of incipient cultivation and domestication proper in
Europe, if my hunch is right.
ARCHEOLOGICAL DIFFICULTIES IN SEEING THE INCIPIENT ERA
The way I see it, two things were required in order that an era of
incipient cultivation and domestication could begin. First, there had
to be the natural environment of a nuclear area, with its whole group
of plants and animals capable of domestication. This is the aspect of
the matter which we've said is directly given by nature. But it is
quite possible that such an environment with such a group of plants
and animals in it may have existed well before ten thousand years ago
in the Near East. It is also quite possible that the same promising
condition may have existed in regions which never developed into
nuclear areas proper. Here, again, we come back to the cultural factor.
I think it was that atmosphere of experimentation we've talked about
once or twice before. I can't define it for you, other than to say that
by the end of the Ice Age, the general level of many cultures was ready
for change. Ask me how and why this was so, and I'll tell you we don't
know yet, and that if we did understand this kind of question, there
would be no need for me to go on being a prehistorian!
[Illustration: POSSIBLE RELATIONSHIPS OF STAGES AND ERAS IN WESTERN
ASIA AND NORTHEASTERN AFRICA]
Now since this was an era of incipience, of the birth of new ideas,
and of experimentation, it is very difficult to see its traces
archeologically. New tools having to do with the new ways of getting
and, in fact, producing food would have taken some time to develop.
It need not surprise us too much if we cannot find hoes for planting
and sickles for reaping grain at the very beginning. We might expect
a time of making-do with some of the older tools, or with make-shift
tools, for some of the new jobs. The present-day wild cousin of the
domesticated sheep still lives in the mountains of western Asia. It has
no wool, only a fine down under hair like that of a deer, so it need
not surprise us to find neither the whorls used for spinning nor traces
of woolen cloth. It must have taken some time for a wool-bearing sheep
to develop and also time for the invention of the new tools which go
with weaving. It would have been the same with other kinds of tools for
the new way of life.
It is difficult even for an experienced comparative zoologist to tell
which are the bones of domesticated animals and which are those of
their wild cousins. This is especially so because the animal bones the
archeologists find are usually fragmentary. Furthermore, we do not have
a sort of library collection of the skeletons of the animals or an
herbarium of the plants of those times, against which the traces which
the archeologists find may be checked. We are only beginning to get
such collections for the modern wild forms of animals and plants from
some of our nuclear areas. In the nuclear area in the Near East, some
of the wild animals, at least, have already become extinct. There are
no longer wild cattle or wild horses in western Asia. We know they were
there from the finds we've made in caves of late Ice Age times, and
from some slightly later sites.
SITES WITH ANTIQUITIES OF THE INCIPIENT ERA
So far, we know only a very few sites which would suit my notion of the
incipient era of cultivation and animal domestication. I am closing
this chapter with descriptions of two of the best Near Eastern examples
I know of. You may not be satisfied that what I am able to describe
makes a full-bodied era of development at all. Remember, however, that
I've told you I'm largely playing a kind of a hunch, and also that the
archeological materials of this era will always be extremely difficult
to interpret. At the beginning of any new way of life, there will be a
great tendency for people to make-do, at first, with tools and habits
they are already used to. I would suspect that a great deal of this
making-do went on almost to the end of this era.
THE NATUFIAN, AN ASSEMBLAGE OF THE INCIPIENT ERA
The assemblage called the Natufian comes from the upper layers of a
number of caves in Palestine. Traces of its flint industry have also
turned up in Syria and Lebanon. We don't know just how old it is. I
guess that it probably falls within five hundred years either way of
about 7750 B.C.
Until recently, the people who produced the Natufian assemblage were
thought to have been only cave dwellers, but now at least three open
air Natufian sites have been briefly described. In their best-known
dwelling place, on Mount Carmel, the Natufian folk lived in the open
mouth of a large rock-shelter and on the terrace in front of it. On the
terrace, they had set at least two short curving lines of stones; but
these were hardly architecture; they seem more like benches or perhaps
the low walls of open pens. There were also one or two small clusters
of stones laid like paving, and a ring of stones around a hearth or
fireplace. One very round and regular basin-shaped depression had been
cut into the rocky floor of the terrace, and there were other less
regular basin-like depressions. In the newly reported open air sites,
there seem to have been huts with rounded corners.
Most of the finds in the Natufian layer of the Mount Carmel cave were
flints. About 80 per cent of these flint tools were microliths made
by the regular working of tiny blades into various tools, some having
geometric forms. The larger flint tools included backed blades, burins,
scrapers, a few arrow points, some larger hacking or picking tools, and
one special type. This last was the sickle blade.
We know a sickle blade of flint when we see one, because of a strange
polish or sheen which seems to develop on the cutting edge when the
blade has been used to cut grasses or grain, or--perhaps--reeds. In
the Natufian, we have even found the straight bone handles in which a
number of flint sickle blades were set in a line.
There was a small industry in ground or pecked stone (that is, abraded
not chipped) in the Natufian. This included some pestle and mortar
fragments. The mortars are said to have a deep and narrow hole,
and some of the pestles show traces of red ochre. We are not sure
that these mortars and pestles were also used for grinding food. In
addition, there were one or two bits of carving in stone.
NATUFIAN ANTIQUITIES IN OTHER MATERIALS; BURIALS AND PEOPLE
The Natufian industry in bone was quite rich. It included, beside the
sickle hafts mentioned above, points and harpoons, straight and curved
types of fish-hooks, awls, pins and needles, and a variety of beads and
pendants. There were also beads and pendants of pierced teeth and shell.
A number of Natufian burials have been found in the caves; some burials
were grouped together in one grave. The people who were buried within
the Mount Carmel cave were laid on their backs in an extended position,
while those on the terrace seem to have been flexed (placed in their
graves in a curled-up position). This may mean no more than that it was
easier to dig a long hole in cave dirt than in the hard-packed dirt of
the terrace. The people often had some kind of object buried with them,
and several of the best collections of beads come from the burials. On
two of the skulls there were traces of elaborate head-dresses of shell
beads.
[Illustration: SKETCH OF NATUFIAN ASSEMBLAGE
MICROLITHS
ARCHITECTURE?
BURIAL
CHIPPED STONE
GROUND STONE
BONE]
The animal bones of the Natufian layers show beasts of a modern type,
but with some differences from those of present-day Palestine. The
bones of the gazelle far outnumber those of the deer; since gazelles
like a much drier climate than deer, Palestine must then have had much
the same climate that it has today. Some of the animal bones were those
of large or dangerous beasts: the hyena, the bear, the wild boar,
and the leopard. But the Natufian people may have had the help of a
large domesticated dog. If our guess at a date for the Natufian is
right (about 7750 B.C.), this is an earlier dog than was that in the
Maglemosian of northern Europe. More recently, it has been reported
that a domesticated goat is also part of the Natufian finds.
The study of the human bones from the Natufian burials is not yet
complete. Until Professor McCown's study becomes available, we may note
Professor Coon's assessment that these people were of a basically
Mediterranean type.
THE KARIM SHAHIR ASSEMBLAGE
Karim Shahir differs from the Natufian sites in that it shows traces
of a temporary open site or encampment. It lies on the top of a bluff
in the Kurdish hill-country of northeastern Iraq. It was dug by Dr.
Bruce Howe of the expedition I directed in 1950-51 for the Oriental
Institute and the American Schools of Oriental Research. In 1954-55,
our expedition located another site, Mlefaat, with general resemblance
to Karim Shahir, but about a hundred miles north of it. In 1956, Dr.
Ralph Solecki located still another Karim Shahir type of site called
Zawi Chemi Shanidar. The Zawi Chemi site has a radiocarbon date of
8900 ± 300 B.C.
Karim Shahir has evidence of only one very shallow level of occupation.
It was probably not lived on very long, although the people who lived
on it spread out over about three acres of area. In spots, the single
layer yielded great numbers of fist-sized cracked pieces of limestone,
which had been carried up from the bed of a stream at the bottom of the
bluff. We think these cracked stones had something to do with a kind of
architecture, but we were unable to find positive traces of hut plans.
At Mlefaat and Zawi Chemi, there were traces of rounded hut plans.
As in the Natufian, the great bulk of small objects of the Karim Shahir
assemblage was in chipped flint. A large proportion of the flint tools
were microlithic bladelets and geometric forms. The flint sickle blade
was almost non-existent, being far scarcer than in the Natufian. The
people of Karim Shahir did a modest amount of work in the grinding of
stone; there were milling stone fragments of both the mortar and the
quern type, and stone hoes or axes with polished bits. Beads, pendants,
rings, and bracelets were made of finer quality stone. We found a few
simple points and needles of bone, and even two rather formless unbaked
clay figurines which seemed to be of animal form.
[Illustration: SKETCH OF KARIM SHAHIR ASSEMBLAGE
CHIPPED STONE
GROUND STONE
UNBAKED CLAY
SHELL
BONE
ARCHITECTURE]
Karim Shahir did not yield direct evidence of the kind of vegetable
food its people ate. The animal bones showed a considerable
increase in the proportion of the bones of the species capable of
domestication--sheep, goat, cattle, horse, dog--as compared with animal
bones from the earlier cave sites of the area, which have a high
proportion of bones of wild forms like deer and gazelle. But we do not
know that any of the Karim Shahir animals were actually domesticated.
Some of them may have been, in an incipient way, but we have no means
at the moment that will tell us from the bones alone.
WERE THE NATUFIAN AND KARIM SHAHIR PEOPLES FOOD-PRODUCERS?
It is clear that a great part of the food of the Natufian people
must have been hunted or collected. Shells of land, fresh-water, and
sea animals occur in their cave layers. The same is true as regards
Karim Shahir, save for sea shells. But on the other hand, we have
the sickles, the milling stones, the possible Natufian dog, and the
goat, and the general animal situation at Karim Shahir to hint at an
incipient approach to food-production. At Karim Shahir, there was the
tendency to settle down out in the open; this is echoed by the new
reports of open air Natufian sites. The large number of cracked stones
certainly indicates that it was worth the people's while to have some
kind of structure, even if the site as a whole was short-lived.
It is a part of my hunch that these things all point toward
food-production--that the hints we seek are there. But in the sense
that the peoples of the era of the primary village-farming community,
which we shall look at next, are fully food-producing, the Natufian
and Karim Shahir folk had not yet arrived. I think they were part of
a general build-up to full scale food-production. They were possibly
controlling a few animals of several kinds and perhaps one or two
plants, without realizing the full possibilities of this control as a
new way of life.
This is why I think of the Karim Shahir and Natufian folk as being at
a level, or in an era, of incipient cultivation and domestication. But
we shall have to do a great deal more excavation in this range of time
before we'll get the kind of positive information we need.
SUMMARY
I am sorry that this chapter has had to be so much more about ideas
than about the archeological traces of prehistoric men themselves.
But the antiquities of the incipient era of cultivation and animal
domestication will not be spectacular, even when we do have them
excavated in quantity. Few museums will be interested in these
antiquities for exhibition purposes. The charred bits or impressions
of plants, the fragments of animal bone and shell, and the varied
clues to climate and environment will be as important as the artifacts
themselves. It will be the ideas to which these traces lead us that
will be important. I am sure that this unspectacular material--when we
have much more of it, and learn how to understand what it says--will
lead us to how and why answers about the first great change in human
history.
We know the earliest village-farming communities appeared in western
Asia, in a nuclear area. We do not yet know why the Near Eastern
experiment came first, or why it didn't happen earlier in some other
nuclear area. Apparently, the level of culture and the promise of the
natural environment were ready first in western Asia. The next sites
we look at will show a simple but effective food-production already
in existence. Without effective food-production and the settled
village-farming communities, civilization never could have followed.
How effective food-production came into being by the end of the
incipient era, is, I believe, one of the most fascinating questions any
archeologist could face.
It now seems probable--from possibly two of the Palestinian sites with
varieties of the Natufian (Jericho and Nahal Oren)--that there were
one or more local Palestinian developments out of the Natufian into
later times. In the same way, what followed after the Karim Shahir type
of assemblage in northeastern Iraq was in some ways a reflection of
beginnings made at Karim Shahir and Zawi Chemi.
THE First Revolution
[Illustration]
As the incipient era of cultivation and animal domestication passed
onward into the era of the primary village-farming community, the first
basic change in human economy was fully achieved. In southwestern Asia,
this seems to have taken place about nine thousand years ago. I am
going to restrict my description to this earliest Near Eastern case--I
do not know enough about the later comparable experiments in the Far
East and in the New World. Let us first, once again, think of the
contrast between food-collecting and food-producing as ways of life.
THE DIFFERENCE BETWEEN FOOD-COLLECTORS AND FOOD-PRODUCERS
Childe used the word "revolution" because of the radical change that
took place in the habits and customs of man. Food-collectors--that is,
hunters, fishers, berry- and nut-gatherers--had to live in small groups
or bands, for they had to be ready to move wherever their food supply
moved. Not many people can be fed in this way in one area, and small
children and old folks are a burden. There is not enough food to store,
and it is not the kind that can be stored for long.
Do you see how this all fits into a picture? Small groups of people
living now in this cave, now in that--or out in the open--as they moved
after the animals they hunted; no permanent villages, a few half-buried
huts at best; no breakable utensils; no pottery; no signs of anything
for clothing beyond the tools that were probably used to dress the
skins of animals; no time to think of much of anything but food and
protection and disposal of the dead when death did come: an existence
which takes nature as it finds it, which does little or nothing to
modify nature--all in all, a savages existence, and a very tough one.
A man who spends his whole life following animals just to kill them to
eat, or moving from one berry patch to another, is really living just
like an animal himself.
THE FOOD-PRODUCING ECONOMY
Against this picture let me try to draw another--that of man's life
after food-production had begun. His meat was stored on the hoof,
his grain in silos or great pottery jars. He lived in a house: it was
worth his while to build one, because he couldn't move far from his
fields and flocks. In his neighborhood enough food could be grown
and enough animals bred so that many people were kept busy. They all
lived close to their flocks and fields, in a village. The village was
already of a fair size, and it was growing, too. Everybody had more to
eat; they were presumably all stronger, and there were more children.
Children and old men could shepherd the animals by day or help with
the lighter work in the fields. After the crops had been harvested the
younger men might go hunting and some of them would fish, but the food
they brought in was only an addition to the food in the village; the
villagers wouldn't starve, even if the hunters and fishermen came home
empty-handed.
There was more time to do different things, too. They began to modify
nature. They made pottery out of raw clay, and textiles out of hair
or fiber. People who became good at pottery-making traded their pots
for food and spent all of their time on pottery alone. Other people
were learning to weave cloth or to make new tools. There were already
people in the village who were becoming full-time craftsmen.
Other things were changing, too. The villagers must have had
to agree on new rules for living together. The head man of the
village had problems different from those of the chief of the small
food-collectors' band. If somebody's flock of sheep spoiled a wheat
field, the owner wanted payment for the grain he lost. The chief of
the hunters was never bothered with such questions. Even the gods
had changed. The spirits and the magic that had been used by hunters
weren't of any use to the villagers. They needed gods who would watch
over the fields and the flocks, and they eventually began to erect
buildings where their gods might dwell, and where the men who knew most
about the gods might live.
WAS FOOD-PRODUCTION A REVOLUTION?
If you can see the difference between these two pictures--between
life in the food-collecting stage and life after food-production
had begun--you'll see why Professor Childe speaks of a revolution.
By revolution, he doesn't mean that it happened over night or that
it happened only once. We don't know exactly how long it took. Some
people think that all these changes may have occurred in less than
500 years, but I doubt that. The incipient era was probably an affair
of some duration. Once the level of the village-farming community had
been established, however, things did begin to move very fast. By
six thousand years ago, the descendants of the first villagers had
developed irrigation and plow agriculture in the relatively rainless
Mesopotamian alluvium and were living in towns with temples. Relative
to the half million years of food-gathering which lay behind, this had
been achieved with truly revolutionary suddenness.
GAPS IN OUR KNOWLEDGE OF THE NEAR EAST
If you'll look again at the chart (p. 111) you'll see that I have
very few sites and assemblages to name in the incipient era of
cultivation and domestication, and not many in the earlier part of
the primary village-farming level either. Thanks in no small part
to the intelligent co-operation given foreign excavators by the
Iraq Directorate General of Antiquities, our understanding of the
sequence in Iraq is growing more complete. I shall use Iraq as my main
yard-stick here. But I am far from being able to show you a series of
Sears Roebuck catalogues, even century by century, for any part of
the nuclear area. There is still a great deal of earth to move, and a
great mass of material to recover and interpret before we even begin to
understand how and why.
Perhaps here, because this kind of archeology is really my specialty,
you'll excuse it if I become personal for a moment. I very much look
forward to having further part in closing some of the gaps in knowledge
of the Near East. This is not, as I've told you, the spectacular
range of Near Eastern archeology. There are no royal tombs, no gold,
no great buildings or sculpture, no writing, in fact nothing to
excite the normal museum at all. Nevertheless it is a range which,
idea-wise, gives the archeologist tremendous satisfaction. The country
of the hilly flanks is an exciting combination of green grasslands
and mountainous ridges. The Kurds, who inhabit the part of the area
in which I've worked most recently, are an extremely interesting and
hospitable people. Archeologists don't become rich, but I'll forego
the Cadillac for any bright spring morning in the Kurdish hills, on a
good site with a happy crew of workmen and an interested and efficient
staff. It is probably impossible to convey the full feeling which life
on such a dig holds--halcyon days for the body and acute pleasurable
stimulation for the mind. Old things coming newly out of the good dirt,
and the pieces of the human puzzle fitting into place! I think I am
an honest man; I cannot tell you that I am sorry the job is not yet
finished and that there are still gaps in this part of the Near Eastern
archeological sequence.
EARLIEST SITES OF THE VILLAGE FARMERS
So far, the Karim Shahir type of assemblage, which we looked at in the
last chapter, is the earliest material available in what I take to
be the nuclear area. We do not believe that Karim Shahir was a village
site proper: it looks more like the traces of a temporary encampment.
Two caves, called Belt and Hotu, which are outside the nuclear area
and down on the foreshore of the Caspian Sea, have been excavated
by Professor Coon. These probably belong in the later extension of
the terminal era of food-gathering; in their upper layers are traits
like the use of pottery borrowed from the more developed era of the
same time in the nuclear area. The same general explanation doubtless
holds true for certain materials in Egypt, along the upper Nile and in
the Kharga oasis: these materials, called Sebilian III, the Khartoum
neolithic, and the Khargan microlithic, are from surface sites,
not from caves. The chart (p. 111) shows where I would place these
materials in era and time.
[Illustration: THE HILLY FLANKS OF THE CRESCENT AND EARLY SITES OF THE
NEAR EAST]
Both Mlefaat and Dr. Solecki's Zawi Chemi Shanidar site appear to have
been slightly more settled in than was Karim Shahir itself. But I do
not think they belong to the era of farming-villages proper. The first
site of this era, in the hills of Iraqi Kurdistan, is Jarmo, on which
we have spent three seasons of work. Following Jarmo comes a variety of
sites and assemblages which lie along the hilly flanks of the crescent
and just below it. I am going to describe and illustrate some of these
for you.
Since not very much archeological excavation has yet been done on sites
of this range of time, I shall have to mention the names of certain
single sites which now alone stand for an assemblage. This does not
mean that I think the individual sites I mention were unique. In the
times when their various cultures flourished, there must have been
many little villages which shared the same general assemblage. We are
only now beginning to locate them again. Thus, if I speak of Jarmo,
or Jericho, or Sialk as single examples of their particular kinds of
assemblages, I don't mean that they were unique at all. I think I could
take you to the sites of at least three more Jarmos, within twenty
miles of the original one. They are there, but they simply haven't yet
been excavated. In 1956, a Danish expedition discovered material of
Jarmo type at Shimshara, only two dozen miles northeast of Jarmo, and
below an assemblage of Hassunan type (which I shall describe presently).
THE GAP BETWEEN KARIM SHAHIR AND JARMO
As we see the matter now, there is probably still a gap in the
available archeological record between the Karim Shahir-Mlefaat-Zawi
Chemi group (of the incipient era) and that of Jarmo (of the
village-farming era). Although some items of the Jarmo type materials
do reflect the beginnings of traditions set in the Karim Shahir group
(see p. 120), there is not a clear continuity. Moreover--to the
degree that we may trust a few radiocarbon dates--there would appear
to be around two thousand years of difference in time. The single
available Zawi Chemi date is 8900 ± 300 B.C.; the most reasonable
group of dates from Jarmo average to about 6750 ± 200 B.C. I am
uncertain about this two thousand years--I do not think it can have
been so long.
This suggests that we still have much work to do in Iraq. You can
imagine how earnestly we await the return of political stability in the
Republic of Iraq.
JARMO, IN THE KURDISH HILLS, IRAQ
The site of Jarmo has a depth of deposit of about twenty-seven feet,
and approximately a dozen layers of architectural renovation and
change. Nevertheless it is a one-period site: its assemblage remains
essentially the same throughout, although one or two new items are
added in later levels. It covers about four acres of the top of a
bluff, below which runs a small stream. Jarmo lies in the hill country
east of the modern oil town of Kirkuk. The Iraq Directorate General of
Antiquities suggested that we look at it in 1948, and we have had three
seasons of digging on it since.
The people of Jarmo grew the barley plant and two different kinds of
wheat. They made flint sickles with which to reap their grain, mortars
or querns on which to crack it, ovens in which it might be parched, and
stone bowls out of which they might eat their porridge. We are sure
that they had the domesticated goat, but Professor Reed (the staff
zoologist) is not convinced that the bones of the other potentially
domesticable animals of Jarmo--sheep, cattle, pig, horse, dog--show
sure signs of domestication. We had first thought that all of these
animals were domesticated ones, but Reed feels he must find out much
more before he can be sure. As well as their grain and the meat from
their animals, the people of Jarmo consumed great quantities of land
snails. Botanically, the Jarmo wheat stands about half way between
fully bred wheat and the wild forms.
ARCHITECTURE: HALL-MARK OF THE VILLAGE
The sure sign of the village proper is in its traces of architectural
permanence. The houses of Jarmo were only the size of a small cottage
by our standards, but each was provided with several rectangular rooms.
The walls of the houses were made of puddled mud, often set on crude
foundations of stone. (The puddled mud wall, which the Arabs call
_touf_, is built by laying a three to six inch course of soft mud,
letting this sun-dry for a day or two, then adding the next course,
etc.) The village probably looked much like the simple Kurdish farming
village of today, with its mud-walled houses and low mud-on-brush
roofs. I doubt that the Jarmo village had more than twenty houses at
any one moment of its existence. Today, an average of about seven
people live in a comparable Kurdish house; probably the population of
Jarmo was about 150 people.
[Illustration: SKETCH OF JARMO ASSEMBLAGE
CHIPPED STONE
UNBAKED CLAY
GROUND STONE
POTTERY _UPPER THIRD OF SITE ONLY._
REED MATTING
BONE
ARCHITECTURE]
It is interesting that portable pottery does not appear until the
last third of the life of the Jarmo village. Throughout the duration
of the village, however, its people had experimented with the plastic
qualities of clay. They modeled little figurines of animals and of
human beings in clay; one type of human figurine they favored was that
of a markedly pregnant woman, probably the expression of some sort of
fertility spirit. They provided their house floors with baked-in-place
depressions, either as basins or hearths, and later with domed ovens of
clay. As we've noted, the houses themselves were of clay or mud; one
could almost say they were built up like a house-sized pot. Then,
finally, the idea of making portable pottery itself appeared, although
I very much doubt that the people of the Jarmo village discovered the
art.
On the other hand, the old tradition of making flint blades and
microlithic tools was still very strong at Jarmo. The sickle-blade was
made in quantities, but so also were many of the much older tool types.
Strangely enough, it is within this age-old category of chipped stone
tools that we see one of the clearest pointers to a newer age. Many of
the Jarmo chipped stone tools--microliths--were made of obsidian, a
black volcanic natural glass. The obsidian beds nearest to Jarmo are
over three hundred miles to the north. Already a bulk carrying trade
had been established--the forerunner of commerce--and the routes were
set by which, in later times, the metal trade was to move.
There are now twelve radioactive carbon dates from Jarmo. The most
reasonable cluster of determinations averages to about 6750 ± 200
B.C., although there is a completely unreasonable range of dates
running from 3250 to 9250 B.C.! _If_ I am right in what I take to be
reasonable, the first flush of the food-producing revolution had been
achieved almost nine thousand years ago.
HASSUNA, IN UPPER MESOPOTAMIAN IRAQ
We are not sure just how soon after Jarmo the next assemblage of Iraqi
material is to be placed. I do not think the time was long, and there
are a few hints that detailed habits in the making of pottery and
ground stone tools were actually continued from Jarmo times into the
time of the next full assemblage. This is called after a site named
Hassuna, a few miles to the south and west of modern Mosul. We also
have Hassunan type materials from several other sites in the same
general region. It is probably too soon to make generalizations about
it, but the Hassunan sites seem to cluster at slightly lower elevations
than those we have been talking about so far.
The catalogue of the Hassuna assemblage is of course more full and
elaborate than that of Jarmo. The Iraqi government's archeologists
who dug Hassuna itself exposed evidence of increasing architectural
know-how. The walls of houses were still formed of puddled mud;
sun-dried bricks appear only in later periods. There were now several
different ways of making and decorating pottery vessels. One style of
pottery painting, called the Samarran style, is an extremely handsome
one and must have required a great deal of concentration and excellence
of draftsmanship. On the other hand, the old habits for the preparation
of good chipped stone tools--still apparent at Jarmo--seem to have
largely disappeared by Hassunan times. The flint work of the Hassunan
catalogue is, by and large, a wretched affair. We might guess that the
kinaesthetic concentration of the Hassuna craftsmen now went into other
categories; that is, they suddenly discovered they might have more fun
working with the newer materials. It's a shame, for example, that none
of their weaving is preserved for us.
The two available radiocarbon determinations from Hassunan contexts
stand at about 5100 and 5600 B.C. ± 250 years.
OTHER EARLY VILLAGE SITES IN THE NUCLEAR AREA
I'll now name and very briefly describe a few of the other early
village assemblages either in or adjacent to the hilly flanks of the
crescent. Unfortunately, we do not have radioactive carbon dates for
many of these materials. We may guess that some particular assemblage,
roughly comparable to that of Hassuna, for example, must reflect a
culture which lived at just about the same time as that of Hassuna. We
do this guessing on the basis of the general similarity and degree of
complexity of the Sears Roebuck catalogues of the particular assemblage
and that of Hassuna. We suppose that for sites near at hand and of a
comparable cultural level, as indicated by their generally similar
assemblages, the dating must be about the same. We may also know that
in a general stratigraphic sense, the sites in question may both appear
at the bottom of the ascending village sequence in their respective
areas. Without a number of consistent radioactive carbon dates, we
cannot be precise about priorities.
[Illustration: SKETCH OF HASSUNA ASSEMBLAGE
POTTERY
POTTERY OBJECTS
CHIPPED STONE
BONE
GROUND STONE
ARCHITECTURE
REED MATTING
BURIAL]
The ancient mound at Jericho, in the Dead Sea valley in Palestine,
yields some very interesting material. Its catalogue somewhat resembles
that of Jarmo, especially in the sense that there is a fair depth
of deposit without portable pottery vessels. On the other hand, the
architecture of Jericho is surprisingly complex, with traces of massive
stone fortification walls and the general use of formed sun-dried
mud brick. Jericho lies in a somewhat strange and tropically lush
ecological niche, some seven hundred feet below sea level; it is
geographically within the hilly-flanks zone but environmentally not
part of it.
Several radiocarbon dates for Jericho fall within the range of those
I find reasonable for Jarmo, and their internal statistical consistency
is far better than that for the Jarmo determinations. It is not yet
clear exactly what this means.
The mound at Jericho (Tell es-Sultan) contains a remarkably
fine sequence, which perhaps does not have the gap we noted in
Iraqi-Kurdistan between the Karim Shahir group and Jarmo. While I am
not sure that the Jericho sequence will prove valid for those parts
of Palestine outside the special Dead Sea environmental niche, the
sequence does appear to proceed from the local variety of Natufian into
that of a very well settled community. So far, we have little direct
evidence for the food-production basis upon which the Jericho people
subsisted.
There is an early village assemblage with strong characteristics of its
own in the land bordering the northeast corner of the Mediterranean
Sea, where Syria and the Cilician province of Turkey join. This early
Syro-Cilician assemblage must represent a general cultural pattern
which was at least in part contemporary with that of the Hassuna
assemblage. These materials from the bases of the mounds at Mersin, and
from Judaidah in the Amouq plain, as well as from a few other sites,
represent the remains of true villages. The walls of their houses were
built of puddled mud, but some of the house foundations were of stone.
Several different kinds of pottery were made by the people of these
villages. None of it resembles the pottery from Hassuna or from the
upper levels of Jarmo or Jericho. The Syro-Cilician people had not
lost their touch at working flint. An important southern variation of
the Syro-Cilician assemblage has been cleared recently at Byblos, a
port town famous in later Phoenician times. There are three radiocarbon
determinations which suggest that the time range for these developments
was in the sixth or early fifth millennium B.C.
It would be fascinating to search for traces of even earlier
village-farming communities and for the remains of the incipient
cultivation era, in the Syro-Cilician region.
THE IRANIAN PLATEAU AND THE NILE VALLEY
The map on page 125 shows some sites which lie either outside or in
an extension of the hilly-flanks zone proper. From the base of the
great mound at Sialk on the Iranian plateau came an assemblage of
early village material, generally similar, in the kinds of things it
contained, to the catalogues of Hassuna and Judaidah. The details of
how things were made are different; the Sialk assemblage represents
still another cultural pattern. I suspect it appeared a bit later
in time than did that of Hassuna. There is an important new item in
the Sialk catalogue. The Sialk people made small drills or pins of
hammered copper. Thus the metallurgist's specialized craft had made its
appearance.
There is at least one very early Iranian site on the inward slopes
of the hilly-flanks zone. It is the earlier of two mounds at a place
called Bakun, in southwestern Iran; the results of the excavations
there are not yet published and we only know of its coarse and
primitive pottery. I only mention Bakun because it helps us to plot the
extent of the hilly-flanks zone villages on the map.
The Nile Valley lies beyond the peculiar environmental zone of the
hilly flanks of the crescent, and it is probable that the earliest
village-farming communities in Egypt were established by a few people
who wandered into the Nile delta area from the nuclear area. The
assemblage which is most closely comparable to the catalogue of Hassuna
or Judaidah, for example, is that from little settlements along the
shore of the Fayum lake. The Fayum materials come mainly from grain
bins or silos. Another site, Merimde, in the western part of the Nile
delta, shows the remains of a true village, but it may be slightly
later than the settlement of the Fayum. There are radioactive carbon
dates for the Fayum materials at about 4275 B.C. ± 320 years, which
is almost fifteen hundred years later than the determinations suggested
for the Hassunan or Syro-Cilician assemblages. I suspect that this
is a somewhat over-extended indication of the time it took for the
generalized cultural pattern of village-farming community life to
spread from the nuclear area down into Egypt, but as yet we have no way
of testing these matters.
In this same vein, we have two radioactive carbon dates for an
assemblage from sites near Khartoum in the Sudan, best represented by
the mound called Shaheinab. The Shaheinab catalogue roughly corresponds
to that of the Fayum; the distance between the two places, as the Nile
flows, is roughly 1,500 miles. Thus it took almost a thousand years for
the new way of life to be carried as far south into Africa as Khartoum;
the two Shaheinab dates average about 3300 B.C. ± 400 years.
If the movement was up the Nile (southward), as these dates suggest,
then I suspect that the earliest available village material of middle
Egypt, the so-called Tasian, is also later than that of the Fayum. The
Tasian materials come from a few graves near a village called Deir
Tasa, and I have an uncomfortable feeling that the Tasian assemblage
may be mainly an artificial selection of poor examples of objects which
belong in the following range of time.
SPREAD IN TIME AND SPACE
There are now two things we can do; in fact, we have already begun to
do them. We can watch the spread of the new way of life upward through
time in the nuclear area. We can also see how the new way of life
spread outward in space from the nuclear area, as time went on. There
is good archeological evidence that both these processes took place.
For the hill country of northeastern Iraq, in the nuclear area, we
have already noticed how the succession (still with gaps) from Karim
Shahir, through Mlefaat and Jarmo, to Hassuna can be charted (see
chart, p. 111). In the next chapter, we shall continue this charting
and description of what happened in Iraq upward through time. We also
watched traces of the new way of life move through space up the Nile
into Africa, to reach Khartoum in the Sudan some thirty-five hundred
years later than we had seen it at Jarmo or Jericho. We caught glimpses
of it in the Fayum and perhaps at Tasa along the way.
For the remainder of this chapter, I shall try to suggest briefly for
you the directions taken by the spread of the new way of life from the
nuclear area in the Near East. First, let me make clear again that
I _do not_ believe that the village-farming community way of life
was invented only once and in the Near East. It seems to me that the
evidence is very clear that a separate experiment arose in the New
World. For China, the question of independence or borrowing--in the
appearance of the village-farming community there--is still an open
one. In the last chapter, we noted the probability of an independent
nuclear area in southeastern Asia. Professor Carl Sauer strongly
champions the great importance of this area as _the_ original center
of agricultural pursuits, as a kind of cradle of all incipient eras
of the Old World at least. While there is certainly not the slightest
archeological evidence to allow us to go that far, we may easily expect
that an early southeast Asian development would have been felt in
China. However, the appearance of the village-farming community in the
northwest of India, at least, seems to have depended on the earlier
development in the Near East. It is also probable that ideas of the new
way of life moved well beyond Khartoum in Africa.
THE SPREAD OF THE VILLAGE-FARMING COMMUNITY WAY OF LIFE INTO EUROPE
How about Europe? I won't give you many details. You can easily imagine
that the late prehistoric prelude to European history is a complicated
affair. We all know very well how complicated an area Europe is now,
with its welter of different languages and cultures. Remember, however,
that a great deal of archeology has been done on the late prehistory of
Europe, and very little on that of further Asia and Africa. If we knew
as much about these areas as we do of Europe, I expect we'd find them
just as complicated.
This much is clear for Europe, as far as the spread of the
village-community way of life is concerned. The general idea and much
of the know-how and the basic tools of food-production moved from the
Near East to Europe. So did the plants and animals which had been
domesticated; they were not naturally at home in Europe, as they were
in western Asia. I do not, of course, mean that there were traveling
salesmen who carried these ideas and things to Europe with a commercial
gleam in their eyes. The process took time, and the ideas and things
must have been passed on from one group of people to the next. There
was also some actual movement of peoples, but we don't know the size of
the groups that moved.
The story of the colonization of Europe by the first farmers is
thus one of (1) the movement from the eastern Mediterranean lands
of some people who were farmers; (2) the spread of ideas and things
beyond the Near East itself and beyond the paths along which the
colonists moved; and (3) the adaptations of the ideas and things
by the indigenous "Forest folk," about whose receptiveness Professor
Mathiassen speaks (p. 97). It is important to note that the resulting
cultures in the new European environment were European, not Near
Eastern. The late Professor Childe remarked that the peoples of the
West "were not slavish imitators; they adapted the gifts from the East
... into a new and organic whole capable of developing on its own
original lines."
THE WAYS TO EUROPE
Suppose we want to follow the traces of those earliest village-farmers
who did travel from western Asia into Europe. Let us start from
Syro-Cilicia, that part of the hilly-flanks zone proper which lies in
the very northeastern corner of the Mediterranean. Three ways would be
open to us (of course we could not be worried about permission from the
Soviet authorities!). We would go north, or north and slightly east,
across Anatolian Turkey, and skirt along either shore of the Black Sea
or even to the east of the Caucasus Mountains along the Caspian Sea,
to reach the plains of Ukrainian Russia. From here, we could march
across eastern Europe to the Baltic and Scandinavia, or even hook back
southwestward to Atlantic Europe.
Our second way from Syro-Cilicia would also lie over Anatolia, to the
northwest, where we would have to swim or raft ourselves over the
Dardanelles or the Bosphorus to the European shore. Then we would bear
left toward Greece, but some of us might turn right again in Macedonia,
going up the valley of the Vardar River to its divide and on down
the valley of the Morava beyond, to reach the Danube near Belgrade
in Jugoslavia. Here we would turn left, following the great river
valley of the Danube up into central Europe. We would have a number of
tributary valleys to explore, or we could cross the divide and go down
the valley of the Rhine to the North Sea.
Our third way from Syro-Cilicia would be by sea. We would coast along
southern Anatolia and visit Cyprus, Crete, and the Aegean islands on
our way to Greece, where, in the north, we might meet some of those who
had taken the second route. From Greece, we would sail on to Italy and
the western isles, to reach southern France and the coasts of Spain.
Eventually a few of us would sail up the Atlantic coast of Europe, to
reach western Britain and even Ireland.
[Illustration: PROBABLE ROUTES AND TIMING IN THE SPREAD OF THE
VILLAGE-FARMING COMMUNITY WAY OF LIFE FROM THE NEAR EAST TO EUROPE]
Of course none of us could ever take these journeys as the first
farmers took them, since the whole course of each journey must have
lasted many lifetimes. The date given to the assemblage called Windmill
Hill, the earliest known trace of village-farming communities in
England, is about 2500 B.C. I would expect about 5500 B.C. to be a
safe date to give for the well-developed early village communities of
Syro-Cilicia. We suspect that the spread throughout Europe did not
proceed at an even rate. Professor Piggott writes that "at a date
probably about 2600 B.C., simple agricultural communities were being
established in Spain and southern France, and from the latter region a
spread northwards can be traced ... from points on the French seaboard
of the [English] Channel ... there were emigrations of a certain number
of these tribes by boat, across to the chalk lands of Wessex and Sussex
[in England], probably not more than three or four generations later
than the formation of the south French colonies."
New radiocarbon determinations are becoming available all the
time--already several suggest that the food-producing way of life
had reached the lower Rhine and Holland by 4000 B.C. But not all
prehistorians accept these dates, so I do not show them on my map
(p. 139).
THE EARLIEST FARMERS OF ENGLAND
To describe the later prehistory of all Europe for you would take
another book and a much larger one than this is. Therefore, I have
decided to give you only a few impressions of the later prehistory of
Britain. Of course the British Isles lie at the other end of Europe
from our base-line in western Asia. Also, they received influences
along at least two of the three ways in which the new way of life
moved into Europe. We will look at more of their late prehistory in a
following chapter: here, I shall speak only of the first farmers.
The assemblage called Windmill Hill, which appears in the south of
England, exhibits three different kinds of structures, evidence of
grain-growing and of stock-breeding, and some distinctive types of
pottery and stone implements. The most remarkable type of structure
is the earthwork enclosures which seem to have served as seasonal
"cattle corrals." These enclosures were roughly circular, reached over
a thousand feet in diameter, and sometimes included two or three
concentric sets of banks and ditches. Traces of oblong timber houses
have been found, but not within the enclosures. The second type of
structure is mine-shafts, dug down into the chalk beds where good
flint for the making of axes or hoes could be found. The third type
of structure is long simple mounds or "unchambered barrows," in one
end of which burials were made. It has been commonly believed that the
Windmill Hill assemblage belonged entirely to the cultural tradition
which moved up through France to the Channel. Professor Piggott is now
convinced, however, that important elements of Windmill Hill stem from
northern Germany and Denmark--products of the first way into Europe
from the east.
The archeological traces of a second early culture are to be found
in the west of England, western and northern Scotland, and most of
Ireland. The bearers of this culture had come up the Atlantic coast
by sea from southern France and Spain. The evidence they have left us
consists mainly of tombs and the contents of tombs, with only very
rare settlement sites. The tombs were of some size and received the
bodies of many people. The tombs themselves were built of stone, heaped
over with earth; the stones enclosed a passage to a central chamber
("passage graves"), or to a simple long gallery, along the sides of
which the bodies were laid ("gallery graves"). The general type of
construction is called "megalithic" (= "great stone"), and the whole
earth-mounded structure is often called a _barrow_. Since many have
proper chambers, in one sense or another, we used the term "unchambered
barrow" above to distinguish those of the Windmill Hill type from these
megalithic structures. There is some evidence for sacrifice, libations,
and ceremonial fires, and it is clear that some form of community
ritual was focused on the megalithic tombs.
The cultures of the people who produced the Windmill Hill assemblage
and of those who made the megalithic tombs flourished, at least in
part, at the same time. Although the distributions of the two different
types of archeological traces are in quite different parts of the
country, there is Windmill Hill pottery in some of the megalithic
tombs. But the tombs also contain pottery which seems to have arrived
with the tomb builders themselves.
The third early British group of antiquities of this general time
(following 2500 B.C.) comes from sites in southern and eastern England.
It is not so certain that the people who made this assemblage, called
Peterborough, were actually farmers. While they may on occasion have
practiced a simple agriculture, many items of their assemblage link
them closely with that of the "Forest folk" of earlier times in
England and in the Baltic countries. Their pottery is decorated with
impressions of cords and is quite different from that of Windmill Hill
and the megalithic builders. In addition, the distribution of their
finds extends into eastern Britain, where the other cultures have left
no trace. The Peterborough people had villages with semi-subterranean
huts, and the bones of oxen, pigs, and sheep have been found in a few
of these. On the whole, however, hunting and fishing seem to have been
their vital occupations. They also established trade routes especially
to acquire the raw material for stone axes.
A probably slightly later culture, whose traces are best known from
Skara Brae on Orkney, also had its roots in those cultures of the
Baltic area which fused out of the meeting of the "Forest folk" and
the peoples who took the eastern way into Europe. Skara Brae is very
well preserved, having been built of thin stone slabs about which
dune-sand drifted after the village died. The individual houses, the
bedsteads, the shelves, the chests for clothes and oddments--all built
of thin stone-slabs--may still be seen in place. But the Skara Brae
people lived entirely by sheep- and cattle-breeding, and by catching
shellfish. Neither grain nor the instruments of agriculture appeared at
Skara Brae.
THE EUROPEAN ACHIEVEMENT
The above is only a very brief description of what went on in Britain
with the arrival of the first farmers. There are many interesting
details which I have omitted in order to shorten the story.
I believe some of the difficulty we have in understanding the
establishment of the first farming communities in Europe is with
the word "colonization." We have a natural tendency to think of
colonization as it has happened within the last few centuries. In the
case of the colonization of the Americas, for example, the colonists
came relatively quickly, and in increasingly vast numbers. They had
vastly superior technical, political, and war-making skills, compared
with those of the Indians. There was not much mixing with the Indians.
The case in Europe five or six thousand years ago must have been very
different. I wonder if it is even proper to call people "colonists"
who move some miles to a new region, settle down and farm it for some
years, then move on again, generation after generation? The ideas and
the things which these new people carried were only _potentially_
superior. The ideas and things and the people had to prove themselves
in their adaptation to each new environment. Once this was done another
link to the chain would be added, and then the forest-dwellers and
other indigenous folk of Europe along the way might accept the new
ideas and things. It is quite reasonable to expect that there must have
been much mixture of the migrants and the indigenes along the way; the
Peterborough and Skara Brae assemblages we mentioned above would seem
to be clear traces of such fused cultures. Sometimes, especially if the
migrants were moving by boat, long distances may have been covered in
a short time. Remember, however, we seem to have about three thousand
years between the early Syro-Cilician villages and Windmill Hill.
Let me repeat Professor Childe again. "The peoples of the West were
not slavish imitators: they adapted the gifts from the East ... into
a new and organic whole capable of developing on its own original
lines." Childe is of course completely conscious of the fact that his
"peoples of the West" were in part the descendants of migrants who came
originally from the East, bringing their "gifts" with them. This
was the late prehistoric achievement of Europe--to take new ideas and
things and some migrant peoples and, by mixing them with the old in its
own environments, to forge a new and unique series of cultures.
What we know of the ways of men suggests to us that when the details
of the later prehistory of further Asia and Africa are learned, their
stories will be just as exciting.
THE CONQUEST OF CIVILIZATION
[Illustration]
Now we must return to the Near East again. We are coming to the point
where history is about to begin. I am going to stick pretty close
to Iraq and Egypt in this chapter. These countries will perhaps be
the most interesting to most of us, for the foundations of western
civilization were laid in the river lands of the Tigris and Euphrates
and of the Nile. I shall probably stick closest of all to Iraq, because
things first happened there and also because I know it best.
There is another interesting thing, too. We have seen that the first
experiment in village-farming took place in the Near East. So did
the first experiment in civilization. Both experiments took. The
traditions we live by today are based, ultimately, on those ancient
beginnings in food-production and civilization in the Near East.
WHAT CIVILIZATION MEANS
I shall not try to define civilization for you; rather, I shall
tell you what the word brings to my mind. To me civilization means
urbanization: the fact that there are cities. It means a formal
political set-up--that there are kings or governing bodies that the
people have set up. It means formal laws--rules of conduct--which the
government (if not the people) believes are necessary. It probably
means that there are formalized projects--roads, harbors, irrigation
canals, and the like--and also some sort of army or police force
to protect them. It means quite new and different art forms. It
also usually means there is writing. (The people of the Andes--the
Incas--had everything which goes to make up a civilization but formal
writing. I can see no reason to say they were not civilized.) Finally,
as the late Professor Redfield reminded us, civilization seems to bring
with it the dawn of a new kind of moral order.
In different civilizations, there may be important differences in the
way such things as the above are managed. In early civilizations, it is
usual to find religion very closely tied in with government, law, and
so forth. The king may also be a high priest, or he may even be thought
of as a god. The laws are usually thought to have been given to the
people by the gods. The temples are protected just as carefully as the
other projects.
CIVILIZATION IMPOSSIBLE WITHOUT FOOD-PRODUCTION
Civilizations have to be made up of many people. Some of the people
live in the country; some live in very large towns or cities. Classes
of society have begun. There are officials and government people; there
are priests or religious officials; there are merchants and traders;
there are craftsmen, metal-workers, potters, builders, and so on; there
are also farmers, and these are the people who produce the food for the
whole population. It must be obvious that civilization cannot exist
without food-production and that food-production must also be at a
pretty efficient level of village-farming before civilization can even
begin.
But people can be food-producing without being civilized. In many
parts of the world this is still the case. When the white men first
came to America, the Indians in most parts of this hemisphere were
food-producers. They grew corn, potatoes, tomatoes, squash, and many
other things the white men had never eaten before. But only the Aztecs
of Mexico, the Mayas of Yucatan and Guatemala, and the Incas of the
Andes were civilized.
WHY DIDN'T CIVILIZATION COME TO ALL FOOD-PRODUCERS?
Once you have food-production, even at the well-advanced level of
the village-farming community, what else has to happen before you
get civilization? Many men have asked this question and have failed
to give a full and satisfactory answer. There is probably no _one_
answer. I shall give you my own idea about how civilization _may_ have
come about in the Near East alone. Remember, it is only a guess--a
putting together of hunches from incomplete evidence. It is _not_ meant
to explain how civilization began in any of the other areas--China,
southeast Asia, the Americas--where other early experiments in
civilization went on. The details in those areas are quite different.
Whether certain general principles hold, for the appearance of any
early civilization, is still an open and very interesting question.
WHERE CIVILIZATION FIRST APPEARED IN THE NEAR EAST
You remember that our earliest village-farming communities lay along
the hilly flanks of a great crescent. (See map on p. 125.)
Professor Breasted's "fertile crescent" emphasized the rich river
valleys of the Nile and the Tigris-Euphrates Rivers. Our hilly-flanks
area of the crescent zone arches up from Egypt through Palestine and
Syria, along southern Turkey into northern Iraq, and down along the
southwestern fringe of Iran. The earliest food-producing villages we
know already existed in this area by about 6750 B.C. (± 200 years).
Now notice that this hilly-flanks zone does not include southern
Mesopotamia, the alluvial land of the lower Tigris and Euphrates in
Iraq, or the Nile Valley proper. The earliest known villages of classic
Mesopotamia and Egypt seem to appear fifteen hundred or more years
after those of the hilly-flanks zone. For example, the early Fayum
village which lies near a lake west of the Nile Valley proper (see p.
135) has a radiocarbon date of 4275 B.C. ± 320 years. It was in the
river lands, however, that the immediate beginnings of civilization
were made.
We know that by about 3200 B.C. the Early Dynastic period had begun
in southern Mesopotamia. The beginnings of writing go back several
hundred years earlier, but we can safely say that civilization had
begun in Mesopotamia by 3200 B.C. In Egypt, the beginning of the First
Dynasty is slightly later, at about 3100 B.C., and writing probably
did not appear much earlier. There is no question but that history and
civilization were well under way in both Mesopotamia and Egypt by 3000
B.C.--about five thousand years ago.
THE HILLY-FLANKS ZONE VERSUS THE RIVER LANDS
Why did these two civilizations spring up in these two river
lands which apparently were not even part of the area where the
village-farming community began? Why didnt we have the first
civilizations in Palestine, Syria, north Iraq, or Iran, where we're
sure food-production had had a long time to develop? I think the
probable answer gives a clue to the ways in which civilization began in
Egypt and Mesopotamia.
The land in the hilly flanks is of a sort which people can farm without
too much trouble. There is a fairly fertile coastal strip in Palestine
and Syria. There are pleasant mountain slopes, streams running out to
the sea, and rain, at least in the winter months. The rain belt and the
foothills of the Turkish mountains also extend to northern Iraq and on
to the Iranian plateau. The Iranian plateau has its mountain valleys,
streams, and some rain. These hilly flanks of the crescent, through
most of its arc, are almost made-to-order for beginning farmers. The
grassy slopes of the higher hills would be pasture for their herds
and flocks. As soon as the earliest experiments with agriculture and
domestic animals had been successful, a pleasant living could be
made--and without too much trouble.
I should add here again, that our evidence points increasingly to a
climate for those times which is very little different from that for
the area today. Now look at Egypt and southern Mesopotamia. Both are
lands without rain, for all intents and purposes. Both are lands with
rivers that have laid down very fertile soil--soil perhaps superior to
that in the hilly flanks. But in both lands, the rivers are of no great
aid without some control.
The Nile floods its banks once a year, in late September or early
October. It not only soaks the narrow fertile strip of land on either
side; it lays down a fresh layer of new soil each year. Beyond the
fertile strip on either side rise great cliffs, and behind them is the
desert. In its natural, uncontrolled state, the yearly flood of the
Nile must have caused short-lived swamps that were full of crocodiles.
After a short time, the flood level would have dropped, the water and
the crocodiles would have run back into the river, and the swamp plants
would have become parched and dry.
The Tigris and the Euphrates of Mesopotamia are less likely to flood
regularly than the Nile. The Tigris has a shorter and straighter course
than the Euphrates; it is also the more violent river. Its banks are
high, and when the snows melt and flow into all of its tributary rivers
it is swift and dangerous. The Euphrates has a much longer and more
curving course and few important tributaries. Its banks are lower and
it is less likely to flood dangerously. The land on either side and
between the two rivers is very fertile, south of the modern city of
Baghdad. Unlike the Nile Valley, neither the Tigris nor the Euphrates
is flanked by cliffs. The land on either side of the rivers stretches
out for miles and is not much rougher than a poor tennis court.
THE RIVERS MUST BE CONTROLLED
The real trick in both Egypt and Mesopotamia is to make the rivers work
for you. In Egypt, this is a matter of building dikes and reservoirs
that will catch and hold the Nile flood. In this way, the water is held
and allowed to run off over the fields as it is needed. In Mesopotamia,
it is a matter of taking advantage of natural river channels and branch
channels, and of leading ditches from these onto the fields.
Obviously, we can no longer find the first dikes or reservoirs of
the Nile Valley, or the first canals or ditches of Mesopotamia. The
same land has been lived on far too long for any traces of the first
attempts to be left; or, especially in Egypt, it has been covered by
the yearly deposits of silt, dropped by the river floods. But we're
pretty sure the first food-producers of Egypt and southern Mesopotamia
must have made such dikes, canals, and ditches. In the first place,
there can't have been enough rain for them to grow things otherwise.
In the second place, the patterns for such projects seem to have been
pretty well set by historic times.
CONTROL OF THE RIVERS THE BUSINESS OF EVERYONE
Here, then, is a _part_ of the reason why civilization grew in Egypt
and Mesopotamia first--not in Palestine, Syria, or Iran. In the latter
areas, people could manage to produce their food as individuals. It
wasn't too hard; there were rain and some streams, and good pasturage
for the animals even if a crop or two went wrong. In Egypt and
Mesopotamia, people had to put in a much greater amount of work, and
this work couldn't be individual work. Whole villages or groups of
people had to turn out to fix dikes or dig ditches. The dikes had to be
repaired and the ditches carefully cleared of silt each year, or they
would become useless.
There also had to be hard and fast rules. The person who lived nearest
the ditch or the reservoir must not be allowed to take all the water
and leave none for his neighbors. It was not only a business of
learning to control the rivers and of making their waters do the
farmers' work. It also meant controlling men. But once these men had
managed both kinds of controls, what a wonderful yield they had! The
soil was already fertile, and the silt which came in the floods and
ditches kept adding fertile soil.
THE GERM OF CIVILIZATION IN EGYPT AND MESOPOTAMIA
This learning to work together for the common good was the real germ of
the Egyptian and the Mesopotamian civilizations. The bare elements of
civilization were already there: the need for a governing hand and for
laws to see that the community's work was done and that the water was
justly shared. You may object that there is a sort of chicken and egg
paradox in this idea. How could the people set up the rules until they
had managed to get a way to live, and how could they manage to get a
way to live until they had set up the rules? I think that small groups
must have moved down along the mud-flats of the river banks quite
early, making use of naturally favorable spots, and that the rules grew
out of such cases. It would have been like the hand-in-hand growth of
automobiles and paved highways in the United States.
Once the rules and the know-how did get going, there must have been a
constant interplay of the two. Thus, the more the crops yielded, the
richer and better-fed the people would have been, and the more the
population would have grown. As the population grew, more land would
have needed to be flooded or irrigated, and more complex systems of
dikes, reservoirs, canals, and ditches would have been built. The more
complex the system, the more necessity for work on new projects and for
the control of their use.... And so on....
What I have just put down for you is a guess at the manner of growth of
some of the formalized systems that go to make up a civilized society.
My explanation has been pointed particularly at Egypt and Mesopotamia.
I have already told you that the irrigation and water-control part of
it does not apply to the development of the Aztecs or the Mayas, or
perhaps anybody else. But I think that a fair part of the story of
Egypt and Mesopotamia must be as I've just told you.
I am particularly anxious that you do _not_ understand me to mean that
irrigation _caused_ civilization. I am sure it was not that simple at
all. For, in fact, a complex and highly engineered irrigation system
proper did not come until later times. Let's say rather that the simple
beginnings of irrigation allowed and in fact encouraged a great number
of things in the technological, political, social, and moral realms of
culture. We do not yet understand what all these things were or how
they worked. But without these other aspects of culture, I do not
think that urbanization and civilization itself could have come into
being.
THE ARCHEOLOGICAL SEQUENCE TO CIVILIZATION IN IRAQ
We last spoke of the archeological materials of Iraq on page 130,
where I described the village-farming community of Hassunan type. The
Hassunan type villages appear in the hilly-flanks zone and in the
rolling land adjacent to the Tigris in northern Iraq. It is probable
that even before the Hassuna pattern of culture lived its course, a
new assemblage had been established in northern Iraq and Syria. This
assemblage is called Halaf, after a site high on a tributary of the
Euphrates, on the Syro-Turkish border.
[Illustration: SKETCH OF SELECTED ITEMS OF HALAFIAN ASSEMBLAGE
BEADS AND PENDANTS
POTTERY MOTIFS
POTTERY]
The Halafian assemblage is incompletely known. The culture it
represents included a remarkably handsome painted pottery.
Archeologists have tended to be so fascinated with this pottery that
they have bothered little with the rest of the Halafian assemblage. We
do know that strange stone-founded houses, with plans like those of the
popular notion of an Eskimo igloo, were built. Like the pottery of the
Samarran style, which appears as part of the Hassunan assemblage (see
p. 131), the Halafian painted pottery implies great concentration and
excellence of draftsmanship on the part of the people who painted it.
We must mention two very interesting sites adjacent to the mud-flats of
the rivers, half way down from northern Iraq to the classic alluvial
Mesopotamian area. One is Baghouz on the Euphrates; the other is
Samarra on the Tigris (see map, p. 125). Both these sites yield the
handsome painted pottery of the style called Samarran: in fact it
is Samarra which gives its name to the pottery. Neither Baghouz nor
Samarra have completely Hassunan types of assemblages, and at Samarra
there are a few pots of proper Halafian style. I suppose that Samarra
and Baghouz give us glimpses of those early farmers who had begun to
finger their way down the mud-flats of the river banks toward the
fertile but yet untilled southland.
CLASSIC SOUTHERN MESOPOTAMIA FIRST OCCUPIED
Our next step is into the southland proper. Here, deep in the core of
the mound which later became the holy Sumerian city of Eridu, Iraqi
archeologists uncovered a handsome painted pottery. Pottery of the same
type had been noticed earlier by German archeologists on the surface
of a small mound, awash in the spring floods, near the remains of the
Biblical city of Erech (Sumerian = Uruk; Arabic = Warka). This Eridu
pottery, which is about all we have of the assemblage of the people who
once produced it, may be seen as a blend of the Samarran and Halafian
painted pottery styles. This may over-simplify the case, but as yet we
do not have much evidence to go on. The idea does at least fit with my
interpretation of the meaning of Baghouz and Samarra as way-points on
the mud-flats of the rivers half way down from the north.
My colleague, Robert Adams, believes that there were certainly
riverine-adapted food-collectors living in lower Mesopotamia. The
presence of such would explain why the Eridu assemblage is not simply
the sum of the Halafian and Samarran assemblages. But the domesticated
plants and animals and the basic ways of food-production must have
come from the hilly-flanks country in the north.
Above the basal Eridu levels, and at a number of other sites in the
south, comes a full-fledged assemblage called Ubaid. Incidentally,
there is an aspect of the Ubaidian assemblage in the north as well. It
seems to move into place before the Halaf manifestation is finished,
and to blend with it. The Ubaidian assemblage in the south is by far
the more spectacular. The development of the temple has been traced
at Eridu from a simple little structure to a monumental building some
62 feet long, with a pilaster-decorated façade and an altar in its
central chamber. There is painted Ubaidian pottery, but the style is
hurried and somewhat careless and gives the _impression_ of having been
a cheap mass-production means of decoration when compared with the
carefully drafted styles of Samarra and Halaf. The Ubaidian people made
other items of baked clay: sickles and axes of very hard-baked clay
are found. The northern Ubaidian sites have yielded tools of copper,
but metal tools of unquestionable Ubaidian find-spots are not yet
available from the south. Clay figurines of human beings with monstrous
turtle-like faces are another item in the southern Ubaidian assemblage.
[Illustration: SKETCH OF SELECTED ITEMS OF UBAIDIAN ASSEMBLAGE]
There is a large Ubaid cemetery at Eridu, much of it still awaiting
excavation. The few skeletons so far tentatively studied reveal a
completely modern type of Mediterraneanoid; the individuals whom the
skeletons represent would undoubtedly blend perfectly into the modern
population of southern Iraq. What the Ubaidian assemblage says to us is
that these people had already adapted themselves and their culture to
the peculiar riverine environment of classic southern Mesopotamia. For
example, hard-baked clay axes will chop bundles of reeds very well, or
help a mason dress his unbaked mud bricks, and there were only a few
soft and pithy species of trees available. The Ubaidian levels of Eridu
yield quantities of date pits; that excellent and characteristically
Iraqi fruit was already in use. The excavators also found the clay
model of a ship, with the stepping-point for a mast, so that Sinbad the
Sailor must have had his antecedents as early as the time of Ubaid.
The bones of fish, which must have flourished in the larger canals as
well as in the rivers, are common in the Ubaidian levels and thereafter.
THE UBAIDIAN ACHIEVEMENT
On present evidence, my tendency is to see the Ubaidian assemblage
in southern Iraq as the trace of a new era. I wish there were more
evidence, but what we have suggests this to me. The culture of southern
Ubaid soon became a culture of towns--of centrally located towns with
some rural villages about them. The town had a temple and there must
have been priests. These priests probably had political and economic
functions as well as religious ones, if the somewhat later history of
Mesopotamia may suggest a pattern for us. Presently the temple and its
priesthood were possibly the focus of the market; the temple received
its due, and may already have had its own lands and herds and flocks.
The people of the town, undoubtedly at least in consultation with the
temple administration, planned and maintained the simple irrigation
ditches. As the system flourished, the community of rural farmers would
have produced more than sufficient food. The tendency for specialized
crafts to develop--tentative at best at the cultural level of the
earlier village-farming community era--would now have been achieved,
and probably many other specialists in temple administration, water
control, architecture, and trade would also have appeared, as the
surplus food-supply was assured.
Southern Mesopotamia is not a land rich in natural resources other
than its fertile soil. Stone, good wood for construction, metal, and
innumerable other things would have had to be imported. Grain and
dates--although both are bulky and difficult to transport--and wool and
woven stuffs must have been the mediums of exchange. Over what area did
the trading net-work of Ubaid extend? We start with the idea that the
Ubaidian assemblage is most richly developed in the south. We assume, I
think, correctly, that it represents a cultural flowering of the south.
On the basis of the pottery of the still elusive Eridu immigrants
who had first followed the rivers into alluvial Mesopotamia, we get
the notion that the characteristic painted pottery style of Ubaid
was developed in the southland. If this reconstruction is correct
then we may watch with interest where the Ubaid pottery-painting
tradition spread. We have already mentioned that there is a substantial
assemblage of (and from the southern point of view, _fairly_ pure)
Ubaidian material in northern Iraq. The pottery appears all along the
Iranian flanks, even well east of the head of the Persian Gulf, and
ends in a later and spectacular flourish in an extremely handsome
painted style called the Susa style. Ubaidian pottery has been noted
up the valleys of both of the great rivers, well north of the Iraqi
and Syrian borders on the southern flanks of the Anatolian plateau.
It reaches the Mediterranean Sea and the valley of the Orontes in
Syria, and it may be faintly reflected in the painted style of a
site called Ghassul, on the east bank of the Jordan in the Dead Sea
Valley. Over this vast area--certainly in all of the great basin of
the Tigris-Euphrates drainage system and its natural extensions--I
believe we may lay our fingers on the traces of a peculiar way of
decorating pottery, which we call Ubaidian. This cursive and even
slap-dash decoration, it appears to me, was part of a new cultural
tradition which arose from the adjustments which immigrant northern
farmers first made to the new and challenging environment of southern
Mesopotamia. But exciting as the idea of the spread of influences of
the Ubaid tradition in space may be, I believe you will agree that the
consequences of the growth of that tradition in southern Mesopotamia
itself, as time passed, are even more important.
THE WARKA PHASE IN THE SOUTH
So far, there are only two radiocarbon determinations for the Ubaidian
assemblage, one from Tepe Gawra in the north and one from Warka in the
south. My hunch would be to use the dates 4500 to 3750 B.C., with a
plus or more probably a minus factor of about two hundred years for
each, as the time duration of the Ubaidian assemblage in southern
Mesopotamia.
Next, much to our annoyance, we have what is almost a temporary
black-out. According to the system of terminology I favor, our next
assemblage after that of Ubaid is called the _Warka_ phase, from
the Arabic name for the site of Uruk or Erech. We know it only from
six or seven levels in a narrow test-pit at Warka, and from an even
smaller hole at another site. This assemblage, so far, is known only
by its pottery, some of which still bears Ubaidian style painting. The
characteristic Warkan pottery is unpainted, with smoothed red or gray
surfaces and peculiar shapes. Unquestionably, there must be a great
deal more to say about the Warkan assemblage, but someone will first
have to excavate it!
THE DAWN OF CIVILIZATION
After our exasperation with the almost unknown Warka interlude,
following the brilliant false dawn of Ubaid, we move next to an
assemblage which yields traces of a preponderance of those elements
which we noted (p. 144) as meaning civilization. This assemblage
is that called _Proto-Literate_; it already contains writing. On
the somewhat shaky principle that writing, however early, means
history--and no longer prehistory--the assemblage is named for the
historical implications of its content, and no longer after the name of
the site where it was first found. Since some of the older books used
site-names for this assemblage, I will tell you that the Proto-Literate
includes the latter half of what used to be called the Uruk period
_plus_ all of what used to be called the Jemdet Nasr period. It shows
a consistent development from beginning to end.
I shall, in fact, leave much of the description and the historic
implications of the Proto-Literate assemblage to the conventional
historians. Professor T. J. Jacobsen, reaching backward from the
legends he finds in the cuneiform writings of slightly later times, can
in fact tell you a more complete story of Proto-Literate culture than
I can. It should be enough here if I sum up briefly what the excavated
archeological evidence shows.
We have yet to dig a Proto-Literate site in its entirety, but the
indications are that the sites cover areas the size of small cities.
In architecture, we know of large and monumental temple structures,
which were built on elaborate high terraces. The plans and decoration
of these temples follow the pattern set in the Ubaid phase: the chief
difference is one of size. The German excavators at the site of Warka
reckoned that the construction of only one of the Proto-Literate temple
complexes there must have taken 1,500 men, each working a ten-hour day,
five years to build.
ART AND WRITING
If the architecture, even in its monumental forms, can be seen to
stem from Ubaidian developments, this is not so with our other
evidence of Proto-Literate artistic expression. In relief and applied
sculpture, in sculpture in the round, and on the engraved cylinder
seals--all of which now make their appearance--several completely
new artistic principles are apparent. These include the composition
of subject-matter in groups, commemorative scenes, and especially
the ability and apparent desire to render the human form and face.
Excellent as the animals of the Franco-Cantabrian art may have been
(see p. 85), and however handsome were the carefully drafted
geometric designs and conventionalized figures on the pottery of the
early farmers, there seems to have been, up to this time, a mental
block about the drawing of the human figure and especially the human
face. We do not yet know what caused this self-consciousness about
picturing themselves which seems characteristic of men before the
appearance of civilization. We do know that with civilization, the
mental block seems to have been removed.
Clay tablets bearing pictographic signs are the Proto-Literate
forerunners of cuneiform writing. The earliest examples are not well
understood but they seem to be devices for making accounts and
for remembering accounts. Different from the later case in Egypt,
where writing appears fully formed in the earliest examples, the
development from simple pictographic signs to proper cuneiform writing
may be traced, step by step, in Mesopotamia. It is most probable
that the development of writing was connected with the temple and
the need for keeping account of the temple's possessions. Professor
Jacobsen sees writing as a means for overcoming space, time, and the
increasing complications of human affairs: "Literacy, which began
with ... civilization, enhanced mightily those very tendencies in its
development which characterize it as a civilization and mark it off as
such from other types of culture."
[Illustration: RELIEF ON A PROTO-LITERATE STONE VASE, WARKA
Unrolled drawing, with restoration suggested by figures from
contemporary cylinder seals]
While the new principles in art and the idea of writing are not
foreshadowed in the Ubaid phase, or in what little we know of the
Warkan, I do not think we need to look outside southern Mesopotamia
for their beginnings. We do know something of the adjacent areas,
too, and these beginnings are not there. I think we must accept them
as completely new discoveries, made by the people who were developing
the whole new culture pattern of classic southern Mesopotamia. Full
description of the art, architecture, and writing of the Proto-Literate
phase would call for many details. Men like Professor Jacobsen and Dr.
Adams can give you these details much better than I can. Nor shall I do
more than tell you that the common pottery of the Proto-Literate phase
was so well standardized that it looks factory made. There was also
some handsome painted pottery, and there were stone bowls with inlaid
decoration. Well-made tools in metal had by now become fairly common,
and the metallurgist was experimenting with the casting process. Signs
for plows have been identified in the early pictographs, and a wheeled
chariot is shown on a cylinder seal engraving. But if I were forced to
a guess in the matter, I would say that the development of plows and
draft-animals probably began in the Ubaid period and was another of the
great innovations of that time.
The Proto-Literate assemblage clearly suggests a highly developed and
sophisticated culture. While perhaps not yet fully urban, it is on
the threshold of urbanization. There seems to have been a very dense
settlement of Proto-Literate sites in classic southern Mesopotamia,
many of them newly founded on virgin soil where no earlier settlements
had been. When we think for a moment of what all this implies, of the
growth of an irrigation system which must have existed to allow the
flourish of this culture, and of the social and political organization
necessary to maintain the irrigation system, I think we will agree that
at last we are dealing with civilization proper.
FROM PREHISTORY TO HISTORY
Now it is time for the conventional ancient historians to take over
the story from me. Remember this when you read what they write. Their
real base-line is with cultures ruled over by later kings and emperors,
whose writings describe military campaigns and the administration of
laws and fully organized trading ventures. To these historians, the
Proto-Literate phase is still a simple beginning for what is to follow.
If they mention the Ubaid assemblage at all--the one I was so lyrical
about--it will be as some dim and fumbling step on the path to the
civilized way of life.
I suppose you could say that the difference in the approach is that as
a prehistorian I have been looking forward or upward in time, while the
historians look backward to glimpse what I've been describing here. My
base-line was half a million years ago with a being who had little more
than the capacity to make tools and fire to distinguish him from the
animals about him. Thus my point of view and that of the conventional
historian are bound to be different. You will need both if you want to
understand all of the story of men, as they lived through time to the
present.
End of PREHISTORY
[Illustration]
You'll doubtless easily recall your general course in ancient history:
how the Sumerian dynasties of Mesopotamia were supplanted by those of
Babylonia, how the Hittite kingdom appeared in Anatolian Turkey, and
about the three great phases of Egyptian history. The literate kingdom
of Crete arose, and by 1500 B.C. there were splendid fortified Mycenean
towns on the mainland of Greece. This was the time--about the whole
eastern end of the Mediterranean--of what Professor Breasted called the
"first great internationalism," with flourishing trade, international
treaties, and royal marriages between Egyptians, Babylonians, and
Hittites. By 1200 B.C., the whole thing had fragmented: the "peoples of
the sea" were restless in their isles, and the great ancient centers in
Egypt, Mesopotamia, and Anatolia were eclipsed. Numerous smaller states
arose--Assyria, Phoenicia, Israel--and the Trojan war was fought.
Finally Assyria became the paramount power of all the Near East,
presently to be replaced by Persia.
A new culture, partaking of older west Asiatic and Egyptian elements,
but casting them with its own tradition into a new mould, arose in
mainland Greece.
I once shocked my Classical colleagues to the core by referring to
Greece as a "second degree derived" civilization, but there is much
truth in this. The principles of bronze- and then of iron-working, of
the alphabet, and of many other elements in Greek culture were borrowed
from western Asia. Our debt to the Greeks is too well known for me even
to mention it, beyond recalling to you that it is to Greece we owe the
beginnings of rational or empirical science and thought in general. But
Greece fell in its turn to Rome, and in 55 B.C. Caesar invaded Britain.
I last spoke of Britain on page 142; I had chosen it as my single
example for telling you something of how the earliest farming
communities were established in Europe. Now I will continue with
Britain's later prehistory, so you may sense something of the end of
prehistory itself. Remember that Britain is simply a single example
we select; the same thing could be done for all the other countries
of Europe, and will be possible also, some day, for further Asia and
Africa. Remember, too, that prehistory in most of Europe runs on for
three thousand or more years _after_ conventional ancient history
begins in the Near East. Britain is a good example to use in showing
how prehistory ended in Europe. As we said earlier, it lies at the
opposite end of Europe from the area of highest cultural achievement in
those times, and should you care to read more of the story in detail,
you may do so in the English language.
METAL USERS REACH ENGLAND
We left the story of Britain with the peoples who made three different
assemblages--the Windmill Hill, the megalith-builders, and the
Peterborough--making adjustments to their environments, to the original
inhabitants of the island, and to each other. They had first arrived
about 2500 B.C., and were simple pastoralists and hoe cultivators who
lived in little village communities. Some of them planted little if any
grain. By 2000 B.C., they were well settled in. Then, somewhere in the
range from about 1900 to 1800 B.C., the traces of the invasion of a new
series of peoples began to appear.
The first newcomers are called the Beaker folk, after the name of a
peculiar form of pottery they made. The beaker type of pottery seems
oldest in Spain, where it occurs with great collective tombs of
megalithic construction and with copper tools. But the Beaker folk who
reached England seem already to have moved first from Spain(?) to the
Rhineland and Holland. While in the Rhineland, and before leaving for
England, the Beaker folk seem to have mixed with the local population
and also with incomers from northeastern Europe whose culture included
elements brought originally from the Near East by the eastern way
through the steppes. This last group has also been named for a peculiar
article in its assemblage; the group is called the Battle-axe folk. A
few Battle-axe folk elements, including, in fact, stone battle-axes,
reached England with the earliest Beaker folk,[6] coming from the
Rhineland.
[6] The British authors use the term Beaker folk to mean both
archeological assemblage and human physical type. They speak
of a "... tall, heavy-boned, rugged, and round-headed" strain
which they take to have developed, apparently in the Rhineland,
by a mixture of the original (Spanish?) beaker-makers and
the northeast European battle-axe makers. However, since the
science of physical anthropology is very much in flux at the
moment, and since I am not able to assess the evidence for these
physical types, I _do not_ use the term "folk" in this book with
its usual meaning of "standardized physical type." When I use
"folk" here, I mean simply _the makers of a given archeological
assemblage_. The difficulty only comes when assemblages are
named for some item in them; it is too clumsy to make an
adjective of the item and refer to a "beakerian" assemblage.
The Beaker folk settled earliest in the agriculturally fertile south
and east. There seem to have been several phases of Beaker folk
invasions, and it is not clear whether these all came strictly from the
Rhineland or Holland. We do know that their copper daggers and awls
and armlets are more of Irish or Atlantic European than of Rhineland
origin. A few simple habitation sites and many burials of the Beaker
folk are known. They buried their dead singly, sometimes in conspicuous
individual barrows with the dead warrior in his full trappings. The
spectacular element in the assemblage of the Beaker folk is a group
of large circular monuments with ditches and with uprights of wood or
stone. These "henges" became truly monumental several hundred years
later; while they were occasionally dedicated with a burial, they were
not primarily tombs. The effect of the invasion of the Beaker folk
seems to cut across the whole fabric of life in Britain.
[Illustration: BEAKER]
There was, however, a second major element in British life at this
time. It shows itself in the less well understood traces of a group
again called after one of the items in their catalogue, the Food-vessel
folk. There are many burials in these food-vessel pots in northern
England, Scotland, and Ireland, and the pottery itself seems to
link back to that of the Peterborough assemblage. Like the earlier
Peterborough people in the highland zone before them, the makers of
the food-vessels seem to have been heavily involved in trade. It is
quite proper to wonder whether the food-vessel pottery itself was made
by local women who were married to traders who were middlemen in the
transmission of Irish metal objects to north Germany and Scandinavia.
The belt of high, relatively woodless country, from southwest to
northeast, was already established as a natural route for inland trade.
MORE INVASIONS
About 1500 B.C., the situation became further complicated by the
arrival of new people in the region of southern England anciently
called Wessex. The traces suggest the Brittany coast of France as a
source, and the people seem at first to have been a small but heroic
group of aristocrats. Their "heroes" are buried with wealth and
ceremony, surrounded by their axes and daggers of bronze, their gold
ornaments, and amber and jet beads. These rich finds show that the
trade-linkage these warriors patronized spread from the Baltic sources
of amber to Mycenean Greece or even Egypt, as evidenced by glazed blue
beads.
The great visual trace of Wessex achievement is the final form of
the spectacular sanctuary at Stonehenge. A wooden henge or circular
monument was first made several hundred years earlier, but the site
now received its great circles of stone uprights and lintels. The
diameter of the surrounding ditch at Stonehenge is about 350 feet, the
diameter of the inner circle of large stones is about 100 feet, and
the tallest stone of the innermost horseshoe-shaped enclosure is 29
feet 8 inches high. One circle is made of blue stones which must have
been transported from Pembrokeshire, 145 miles away as the crow flies.
Recently, many carvings representing the profile of a standard type of
bronze axe of the time, and several profiles of bronze daggers--one of
which has been called Mycenean in type--have been found carved in the
stones. We cannot, of course, describe the details of the religious
ceremonies which must have been staged in Stonehenge, but we can
certainly imagine the well-integrated and smoothly working culture
which must have been necessary before such a great monument could have
been built.
THIS ENGLAND
The range from 1900 to about 1400 B.C. includes the time of development
of the archeological features usually called the Early Bronze Age
in Britain. In fact, traces of the Wessex warriors persisted down to
about 1200 B.C. The main regions of the island were populated, and the
adjustments to the highland and lowland zones were distinct and well
marked. The different aspects of the assemblages of the Beaker folk and
the clearly expressed activities of the Food-vessel folk and the Wessex
warriors show that Britain was already taking on her characteristic
trading role, separated from the European continent but conveniently
adjacent to it. The tin of Cornwall--so important in the production
of good bronze--as well as the copper of the west and of Ireland,
taken with the gold of Ireland and the general excellence of Irish
metal work, assured Britain a trader's place in the then known world.
Contacts with the eastern Mediterranean may have been by sea, with
Cornish tin as the attraction, or may have been made by the Food-vessel
middlemen on their trips to the Baltic coast. There they would have
encountered traders who traveled the great north-south European road,
by which Baltic amber moved southward to Greece and the Levant, and
ideas and things moved northward again.
There was, however, the Channel between England and Europe, and this
relative isolation gave some peace and also gave time for a leveling
and further fusion of culture. The separate cultural traditions began
to have more in common. The growing of barley, the herding of sheep and
cattle, and the production of woolen garments were already features
common to all Britain's inhabitants save a few in the remote highlands,
the far north, and the distant islands not yet fully touched by
food-production. The personality of Britain was being formed.
CREMATION BURIALS BEGIN
Along with people of certain religious faiths, archeologists are
against cremation (for other people!). Individuals to be cremated seem
in past times to have been dressed in their trappings and put upon a
large pyre: it takes a lot of wood and a very hot fire for a thorough
cremation. When the burning had been completed, the few fragile scraps
of bone and such odd beads of stone or other rare items as had resisted
the great heat seem to have been whisked into a pot and the pot buried.
The archeologist is left with the pot and the unsatisfactory scraps in
it.
Tentatively, after about 1400 B.C. and almost completely over the whole
island by 1200 B.C., Britain became the scene of cremation burials
in urns. We know very little of the people themselves. None of their
settlements have been identified, although there is evidence that they
grew barley and made enclosures for cattle. The urns used for the
burials seem to have antecedents in the pottery of the Food-vessel
folk, and there are some other links with earlier British traditions.
In Lancashire, a wooden circle seems to have been built about a grave
with cremated burials in urns. Even occasional instances of cremation
may be noticed earlier in Britain, and it is not clear what, if any,
connection the British cremation burials in urns have with the classic
_Urnfields_ which were now beginning in the east Mediterranean and
which we shall mention below.
The British cremation-burial-in-urns folk survived a long time in the
highland zone. In the general British scheme, they make up what is
called the "Middle Bronze Age," but in the highland zone they last
until after 900 B.C. and are considered to be a specialized highland
"Late Bronze Age." In the highland zone, these later cremation-burial
folk seem to have continued the older Food-vessel tradition of being
middlemen in the metal market.
Granting that our knowledge of this phase of British prehistory is
very restricted because the cremations have left so little for the
archeologist, it does not appear that the cremation-burial-urn folk can
be sharply set off from their immediate predecessors. But change on a
grander scale was on the way.
REVERBERATIONS FROM CENTRAL EUROPE
In the centuries immediately following 1000 B.C., we see with fair
clarity two phases of a cultural process which must have been going
on for some time. Certainly several of the invasions we have already
described in this chapter were due to earlier phases of the same
cultural process, but we could not see the details.
[Illustration: SLASHING SWORD]
Around 1200 B.C. central Europe was upset by the spread of the
so-called Urnfield folk, who practiced cremation burial in urns and
whom we also know to have been possessors of long, slashing swords and
the horse. I told you above that we have no idea that the Urnfield
folk proper were in any way connected with the people who made
cremation-burial-urn cemeteries a century or so earlier in Britain. It
has been supposed that the Urnfield folk themselves may have shared
ideas with the people who sacked Troy. We know that the Urnfield
pressure from central Europe displaced other people in northern France,
and perhaps in northwestern Germany, and that this reverberated into
Britain about 1000 B.C.
Soon after 750 B.C., the same thing happened again. This time, the
pressure from central Europe came from the Hallstatt folk who were iron
tool makers: the reverberation brought people from the western Alpine
region across the Channel into Britain.
At first it is possible to see the separate results of these folk
movements, but the developing cultures soon fused with each other and
with earlier British elements. Presently there were also strains of
other northern and western European pottery and traces of Urnfield
practices themselves which appeared in the finished British product. I
hope you will sense that I am vastly over-simplifying the details.
The result seems to have been--among other things--a new kind of
agricultural system. The land was marked off by ditched divisions.
Rectangular fields imply the plow rather than hoe cultivation. We seem
to get a picture of estate or tribal boundaries which included village
communities; we find a variety of tools in bronze, and even whetstones
which show that iron has been honed on them (although the scarce iron
has not been found). Let me give you the picture in Professor S.
Piggott's words: "The ... Late Bronze Age of southern England was but
the forerunner of the earliest Iron Age in the same region, not only in
the techniques of agriculture, but almost certainly in terms of ethnic
kinship ... we can with some assurance talk of the Celts ... the great
early Celtic expansion of the Continent is recognized to be that of the
Urnfield people."
Thus, certainly by 500 B.C., there were people in Britain, some of
whose descendants we may recognize today in name or language in remote
parts of Wales, Scotland, and the Hebrides.
THE COMING OF IRON
Iron--once the know-how of reducing it from its ore in a very hot,
closed fire has been achieved--produces a far cheaper and much more
efficient set of tools than does bronze. Iron tools seem first to
have been made in quantity in Hittite Anatolia about 1500 B.C. In
continental Europe, the earliest, so-called Hallstatt, iron-using
cultures appeared in Germany soon after 750 B.C. Somewhat later,
Greek and especially Etruscan exports of _objets d'art_--which moved
with a flourishing trans-Alpine wine trade--influenced the Hallstatt
iron-working tradition. Still later new classical motifs, together with
older Hallstatt, oriental, and northern nomad motifs, gave rise to a
new style in metal decoration which characterizes the so-called La Tène
phase.
A few iron users reached Britain a little before 400 B.C. Not long
after that, a number of allied groups appeared in southern and
southeastern England. They came over the Channel from France and must
have been Celts with dialects related to those already in England. A
second wave of Celts arrived from the Marne district in France about
250 B.C. Finally, in the second quarter of the first century B.C.,
there were several groups of newcomers, some of whom were Belgae of
a mixed Teutonic-Celtic confederacy of tribes in northern France and
Belgium. The Belgae preceded the Romans by only a few years.
HILL-FORTS AND FARMS
The earliest iron-users seem to have entrenched themselves temporarily
within hill-top forts, mainly in the south. Gradually, they moved
inland, establishing _individual_ farm sites with extensive systems
of rectangular fields. We recognize these fields by the "lynchets" or
lines of soil-creep which plowing left on the slopes of hills. New
crops appeared; there were now bread wheat, oats, and rye, as well as
barley.
At Little Woodbury, near the town of Salisbury, a farmstead has been
rather completely excavated. The rustic buildings were within a
palisade, the round house itself was built of wood, and there were
various outbuildings and pits for the storage of grain. Weaving was
done on the farm, but not blacksmithing, which must have been a
specialized trade. Save for the lack of firearms, the place might
almost be taken for a farmstead on the American frontier in the early
1800s.
Toward 250 B.C. there seems to have been a hasty attempt to repair the
hill-forts and to build new ones, evidently in response to signs of
restlessness being shown by remote relatives in France.
THE SECOND PHASE
Perhaps the hill-forts were not entirely effective or perhaps a
compromise was reached. In any case, the newcomers from the Marne
district did establish themselves, first in the southeast and then to
the north and west. They brought iron with decoration of the La Tène
type and also the two-wheeled chariot. Like the Wessex warriors of
over a thousand years earlier, they made heroes' graves, with their
warriors buried in the war-chariots and dressed in full trappings.
[Illustration: CELTIC BUCKLE]
The metal work of these Marnian newcomers is excellent. The peculiar
Celtic art style, based originally on the classic tendril motif,
is colorful and virile, and fits with Greek and Roman descriptions
of Celtic love of color in dress. There is a strong trace of these
newcomers northward in Yorkshire, linked by Ptolemy's description to
the Parisii, doubtless part of the Celtic tribe which originally gave
its name to Paris on the Seine. Near Glastonbury, in Somerset, two
villages in swamps have been excavated. They seem to date toward the
middle of the first century B.C., which was a troubled time in Britain.
The circular houses were built on timber platforms surrounded with
palisades. The preservation of antiquities by the water-logged peat of
the swamp has yielded us a long catalogue of the materials of these
villagers.
In Scotland, which yields its first iron tools at a date of about 100
B.C., and in northern Ireland even slightly earlier, the effects of the
two phases of newcomers tend especially to blend. Hill-forts, brochs
(stone-built round towers) and a variety of other strange structures
seem to appear as the new ideas develop in the comparative isolation of
northern Britain.
THE THIRD PHASE
For the time of about the middle of the first century B.C., we again
see traces of frantic hill-fort construction. This simple military
architecture now took some new forms. Its multiple ramparts must
reflect the use of slings as missiles, rather than spears. We probably
know the reason. In 56 B.C., Julius Caesar chastised the Veneti of
Brittany for outraging the dignity of Roman ambassadors. The Veneti
were famous slingers, and doubtless the reverberations of escaping
Veneti were felt across the Channel. The military architecture suggests
that some Veneti did escape to Britain.
Also, through Caesar, we learn the names of newcomers who arrived in
two waves, about 75 B.C. and about 50 B.C. These were the Belgae. Now,
at last, we can even begin to speak of dynasties and individuals.
Some time before 55 B.C., the Catuvellauni, originally from the Marne
district in France, had possessed themselves of a large part of
southeastern England. They evidently sailed up the Thames and built a
town of over a hundred acres in area. Here ruled Cassivellaunus, the
first man in England whose name we know, and whose town Caesar sacked.
The town sprang up elsewhere again, however.
THE END OF PREHISTORY
Prehistory, strictly speaking, is now over in southern Britain.
Claudius' effective invasion took place in 43 A.D.; by 83 A.D., a raid
had been made as far north as Aberdeen in Scotland. But by 127 A.D.,
Hadrian had completed his wall from the Solway to the Tyne, and the
Romans settled behind it. In Scotland, Romanization can have affected
the countryside very little. Professor Piggott adds that "... it is
when the pressure of Romanization is relaxed by the break-up of the
Dark Ages that we see again the Celtic metal-smiths handling their
material with the same consummate skill as they had before the Roman
Conquest, and with traditional styles that had not even then forgotten
their Marnian and Belgic heritage."
In fact, many centuries go by, in Britain as well as in the rest of
Europe, before the archeologist's task is complete and the historian on
his own is able to describe the ways of men in the past.
BRITAIN AS A SAMPLE OF THE GENERAL COURSE OF PREHISTORY IN EUROPE
In giving this very brief outline of the later prehistory of Britain,
you will have noticed how often I had to refer to the European
continent itself. Britain, beyond the English Channel for all of her
later prehistory, had a much simpler course of events than did most of
the rest of Europe in later prehistoric times. This holds, in spite
of all the invasions and reverberations from the continent. Most
of Europe was the scene of an even more complicated ebb and flow of
cultural change, save in some of its more remote mountain valleys and
peninsulas.
The whole course of later prehistory in Europe is, in fact, so very
complicated that there is no single good book to cover it all;
certainly there is none in English. There are some good regional
accounts and some good general accounts of part of the range from about
3000 B.C. to A.D. 1. I suspect that the difficulty of making a good
book that covers all of its later prehistory is another aspect of what
makes Europe so very complicated a continent today. The prehistoric
foundations for Europes very complicated set of civilizations,
cultures, and sub-cultures--which begin to appear as history
proceeds--were in themselves very complicated.
Hence, I selected the case of Britain as a single example of how
prehistory ends in Europe. It could have been more complicated than we
found it to be. Even in the subject matter on Britain in the chapter
before the last, we did not see direct traces of the effect on Britain
of the very important developments which took place in the Danubian
way from the Near East. Apparently Britain was not affected. Britain
received the impulses which brought copper, bronze, and iron tools from
an original east Mediterranean homeland into Europe, almost at the ends
of their journeys. But by the same token, they had had time en route to
take on their characteristic European aspects.
Some time ago, Sir Cyril Fox wrote a famous book called _The
Personality of Britain_, sub-titled "Its Influence on Inhabitant and
Invader in Prehistoric and Early Historic Times." We have not gone
into the post-Roman early historic period here; there are still the
Anglo-Saxons and Normans to account for as well as the effects of
the Romans. But what I have tried to do was to begin the story of
how the personality of Britain was formed. The principles that Fox
used, in trying to balance cultural and environmental factors and
interrelationships would not be greatly different for other lands.
Summary
[Illustration]
In the pages you have read so far, you have been brought through the
earliest 99 per cent of the story of man's life on this planet. I have
left only 1 per cent of the story for the historians to tell.
THE DRAMA OF THE PAST
Men first became men when evolution had carried them to a certain
point. This was the point where the eye-hand-brain co-ordination was
good enough so that tools could be made. When tools began to be made
according to sets of lasting habits, we know that men had appeared.
This happened over a half million years ago. The stage for the play
may have been as broad as all of Europe, Africa, and Asia. At least,
it seems unlikely that it was only one little region that saw the
beginning of the drama.
Glaciers and different climates came and went, to change the settings.
But the play went on in the same first act for a very long time. The
men who were the players had simple roles. They had to feed themselves
and protect themselves as best they could. They did this by hunting,
catching, and finding food wherever they could, and by taking such
protection as caves, fire, and their simple tools would give them.
Before the first act was over, the last of the glaciers was melting
away, and the players had added the New World to their stage. If
we want a special name for the first act, we could call it _The
Food-Gatherers_.
There were not many climaxes in the first act, so far as we can see.
But I think there may have been a few. Certainly the pace of the
first act accelerated with the swing from simple gathering to more
intensified collecting. The great cave art of France and Spain was
probably an expression of a climax. Even the ideas of burying the dead
and of the Venus figurines must also point to levels of human thought
and activity that were over and above pure food-getting.
THE SECOND ACT
The second act began only about ten thousand years ago. A few of the
players started it by themselves near the center of the Old World part
of the stage, in the Near East. It began as a plant and animal act, but
it soon became much more complicated.
But the players in this one part of the stage--in the Near East--were
not the only ones to start off on the second act by themselves. Other
players, possibly in several places in the Far East, and certainly in
the New World, also started second acts that began as plant and animal
acts, and then became complicated. We can call the whole second act
_The Food-Producers_.
THE FIRST GREAT CLIMAX OF THE SECOND ACT
In the Near East, the first marked climax of the second act happened
in Mesopotamia and Egypt. The play and the players reached that great
climax that we call civilization. This seems to have come less than
five thousand years after the second act began. But it could never have
happened in the first act at all.
There is another curious thing about the first act. Many of the players
didn't know it was over and they kept on with their roles long after
the second act had begun. On the edges of the stage there are today
some players who are still going on with the first act. The Eskimos,
and the native Australians, and certain tribes in the Amazon jungle are
some of these players. They seem perfectly happy to keep on with the
first act.
The second act moved from climax to climax. The civilizations of
Mesopotamia and Egypt were only the earliest of these climaxes. The
players to the west caught the spirit of the thing, and climaxes
followed there. So also did climaxes come in the Far Eastern and New
World portions of the stage.
The greater part of the second act should really be described to you
by a historian. Although it was a very short act when compared to the
first one, the climaxes complicate it a great deal. I, a prehistorian,
have told you about only the first act, and the very beginning of the
second.
THE THIRD ACT
Also, as a prehistorian I probably should not even mention the third
act--it began so recently. The third act is _The Industrialization_.
It is the one in which we ourselves are players. If the pace of the
second act was so much faster than that of the first, the pace of the
third act is terrific. The danger is that it may wear down the players
completely.
What sort of climaxes will the third act have, and are we already in
one? You have seen by now that the acts of my play are given in terms
of modes or basic patterns of human economy--ways in which people
get food and protection and safety. The climaxes involve more than
human economy. Economics and technological factors may be part of the
climaxes, but they are not all. The climaxes may be revolutions in
their own way, intellectual and social revolutions if you like.
If the third act follows the pattern of the second act, a climax should
come soon after the act begins. We may be due for one soon if we are
not already in it. Remember the terrific pace of this third act.
WHY BOTHER WITH PREHISTORY?
Why do we bother about prehistory? The main reason is that we think it
may point to useful ideas for the present. We are in the troublesome
beginnings of the third act of the play. The beginnings of the second
act may have lessons for us and give depth to our thinking. I know
there are at least _some_ lessons, even in the present incomplete
state of our knowledge. The players who began the second act--that of
food-production--separately, in different parts of the world, were not
all of one pure race nor did they have pure cultural traditions.
Some apparently quite mixed Mediterraneans got off to the first start
on the second act and brought it to its first two climaxes as well.
Peoples of quite different physical type achieved the first climaxes in
China and in the New World.
In our British example of how the late prehistory of Europe worked, we
listed a continuous series of "invasions and reverberations." After
each of these came fusion. Even though the Channel protected Britain
from some of the extreme complications of the mixture and fusion of
continental Europe, you can see how silly it would be to refer to a
"pure" British race or a "pure" British culture. We speak of the United
States as a melting pot. But this is nothing new. Actually, Britain
and all the rest of the world have been melting pots at one time or
another.
By the time the written records of Mesopotamia and Egypt begin to turn
up in number, the climaxes there are well under way. To understand the
beginnings of the climaxes, and the real beginnings of the second act
itself, we are thrown back on prehistoric archeology. And this is as
true for China, India, Middle America, and the Andes, as it is for the
Near East.
There are lessons to be learned from all of man's past, not simply
lessons of how to fight battles or win peace conferences, but of how
human society evolves from one stage to another. Many of these lessons
can only be looked for in the prehistoric past. So far, we have only
made a beginning. There is much still to do, and many gaps in the story
are yet to be filled. The prehistorian's job is to find the evidence,
to fill the gaps, and to discover the lessons men have learned in the
past. As I see it, this is not only an exciting but a very practical
goal for which to strive.
List of Books
BOOKS OF GENERAL INTEREST
(Chosen from a variety of the increasingly useful list of cheap
paperbound books.)
Childe, V. Gordon
_What Happened in History._ 1954. Penguin.
_Man Makes Himself._ 1955. Mentor.
_The Prehistory of European Society._ 1958. Penguin.
Dunn, L. C., and Dobzhansky, Th.
_Heredity, Race, and Society._ 1952. Mentor.
Frankfort, Henri, Frankfort, H. A., Jacobsen, Thorkild, and Wilson,
John A.
_Before Philosophy._ 1954. Penguin.
Simpson, George G.
_The Meaning of Evolution._ 1955. Mentor.
Wheeler, Sir Mortimer
_Archaeology from the Earth._ 1956. Penguin.
GEOCHRONOLOGY AND THE ICE AGE
(Two general books. Some Pleistocene geologists disagree with Zeuner's
interpretation of the dating evidence, but their points of view appear
in professional journals, in articles too cumbersome to list here.)
Flint, R. F.
_Glacial Geology and the Pleistocene Epoch._ 1947. John Wiley
and Sons.
Zeuner, F. E.
_Dating the Past._ 1952 (3rd ed.). Methuen and Co.
FOSSIL MEN AND RACE
(The points of view of physical anthropologists and human
paleontologists are changing very quickly. Two of the different points
of view are listed here.)
Clark, W. E. Le Gros
_History of the Primates._ 1956 (5th ed.). British Museum
(Natural History). (Also in Phoenix edition, 1957.)
Howells, W. W.
_Mankind So Far._ 1944. Doubleday, Doran.
GENERAL ANTHROPOLOGY
(These are standard texts not absolutely up to date in every detail, or
interpretative essays concerned with cultural change through time as
well as in space.)
Kroeber, A. L.
_Anthropology._ 1948. Harcourt, Brace.
Linton, Ralph
_The Tree of Culture._ 1955. Alfred A. Knopf, Inc.
Redfield, Robert
_The Primitive World and Its Transformations._ 1953. Cornell
University Press.
Steward, Julian H.
_Theory of Culture Change._ 1955. University of Illinois Press.
White, Leslie
_The Science of Culture._ 1949. Farrar, Strauss.
GENERAL PREHISTORY
(A sampling of the more useful and current standard works in English.)
Childe, V. Gordon
_The Dawn of European Civilization._ 1957. Kegan Paul, Trench,
Trubner.
_Prehistoric Migrations in Europe._ 1950. Instituttet for
Sammenlignende Kulturforskning.
Clark, Grahame
_Archaeology and Society._ 1957. Harvard University Press.
Clark, J. G. D.
_Prehistoric Europe: The Economic Basis._ 1952. Methuen and Co.
Garrod, D. A. E.
_Environment, Tools, and Man._ 1946. Cambridge University
Press.
Movius, Hallam L., Jr.
"Old World Prehistory: Paleolithic" in _Anthropology Today_.
Kroeber, A. L., ed. 1953. University of Chicago Press.
Oakley, Kenneth P.
_Man the Tool-Maker._ 1956. British Museum (Natural History).
(Also in Phoenix edition, 1957.)
Piggott, Stuart
_British Prehistory._ 1949. Oxford University Press.
Pittioni, Richard
_Die Urgeschichtlichen Grundlagen der Europäischen Kultur._
1949. Deuticke. (A single book which does attempt to cover the
whole range of European prehistory to ca. 1 A.D.)
THE NEAR EAST
Adams, Robert M.
"Developmental Stages in Ancient Mesopotamia," _in_ Steward,
Julian, _et al_, _Irrigation Civilizations: A Comparative
Study_. 1955. Pan American Union.
Braidwood, Robert J.
_The Near East and the Foundations for Civilization._ 1952.
University of Oregon.
Childe, V. Gordon
_New Light on the Most Ancient East._ 1952. Oriental Dept.,
Routledge and Kegan Paul.
Frankfort, Henri
_The Birth of Civilization in the Near East._ 1951. University
of Indiana Press. (Also in Anchor edition, 1956.)
Pallis, Svend A.
_The Antiquity of Iraq._ 1956. Munksgaard.
Wilson, John A.
_The Burden of Egypt._ 1951. University of Chicago Press. (Also
in Phoenix edition, called _The Culture of Ancient Egypt_,
1956.)
HOW DIGGING IS DONE
Braidwood, Linda
_Digging beyond the Tigris._ 1953. Schuman, New York.
Wheeler, Sir Mortimer
_Archaeology from the Earth._ 1954. Oxford, London.
Index
Abbevillian, 48;
core-biface tool, 44, 48
Acheulean, 48, 60
Acheuleo-Levalloisian, 63
Acheuleo-Mousterian, 63
Adams, R. M., 106
Adzes, 45
Africa, east, 67, 89;
north, 70, 89;
south, 22, 25, 34, 40, 67
Agriculture, incipient, in England, 140;
in Near East, 123
Ain Hanech, 48
Amber, taken from Baltic to Greece, 167
American Indians, 90, 142
Anatolia, used as route to Europe, 138
Animals, in caves, 54, 64;
in cave art, 85
Antevs, Ernst, 19
Anyathian, 47
Archeological interpretation, 8
Archeology, defined, 8
Architecture, at Jarmo, 128;
at Jericho, 133
Arrow, points, 94;
shaft straightener, 83
Art, in caves, 84;
East Spanish, 85;
figurines, 84;
Franco-Cantabrian, 84, 85;
movable (engravings, modeling, scratchings), 83;
painting, 83;
sculpture, 83
Asia, western, 67
Assemblage, defined, 13, 14;
European, 94;
Jarmo, 129;
Maglemosian, 94;
Natufian, 113
Aterian, industry, 67;
point, 89
Australopithecinae, 24
Australopithecine, 25, 26
Awls, 77
Axes, 62, 94
Ax-heads, 15
Azilian, 97
Aztecs, 145
Baghouz, 152
Bakun, 134
Baltic sea, 93
Banana, 107
Barley, wild, 108
Barrow, 141
Battle-axe folk, 164;
assemblage, 164
Beads, 80;
bone, 114
Beaker folk, 164;
assemblage, 164-165
Bear, in cave art, 85;
cult, 68
Belgium, 94
Belt cave, 126
Bering Strait, used as route to New World, 98
Bison, in cave art, 85
Blade, awl, 77;
backed, 75;
blade-core, 71;
end-scraper, 77;
stone, defined, 71;
strangulated (notched), 76;
tanged point, 76;
tools, 71, 75-80, 90;
tool tradition, 70
Boar, wild, in cave art, 85
Bogs, source of archeological materials, 94
Bolas, 54
Bordes, François, 62
Borer, 77
Boskop skull, 34
Boyd, William C., 35
Bracelets, 118
Brain, development of, 24
Breadfruit, 107
Breasted, James H., 107
Brick, at Jericho, 133
Britain, 94;
late prehistory, 163-175;
invaders, 173
Broch, 172
Buffalo, in China, 54;
killed by stampede, 86
Burials, 66, 86;
in henges, 164;
in urns, 168
Burins, 75
Burma, 90
Byblos, 134
Camel, 54
Cannibalism, 55
Cattle, wild, 85, 112;
in cave art, 85;
domesticated, 15;
at Skara Brae, 142
Caucasoids, 34
Cave men, 29
Caves, 62;
art in, 84
Celts, 170
Chariot, 160
Chicken, domestication of, 107
Chiefs, in food-gathering groups, 68
Childe, V. Gordon, 8
China, 136
Choukoutien, 28, 35
Choukoutienian, 47
Civilization, beginnings, 144, 149, 157;
meaning of, 144
Clactonian, 45, 47
Clay, used in modeling, 128;
baked, used for tools, 153
Club-heads, 82, 94
Colonization, in America, 142;
in Europe, 142
Combe Capelle, 30
Combe Capelle-Brünn group, 34
Commont, Victor, 51
Coon, Carlton S., 73
Copper, 134
Corn, in America, 145
Corrals for cattle, 140
Cradle of mankind, 136
Cremation, 167
Crete, 162
Cro-Magnon, 30, 34
Cultivation, incipient, 105, 109, 111
Culture, change, 99;
characteristics, defined, 38, 49;
prehistoric, 39
Danube Valley, used as route from Asia, 138
Dates, 153
Deer, 54, 96
Dog, domesticated, 96
Domestication, of animals, 100, 105, 107;
of plants, 100
"Dragon teeth" fossils in China, 28
Drill, 77
Dubois, Eugene, 26
Early Dynastic Period, Mesopotamia, 147
East Spanish art, 72, 85
Egypt, 70, 126
Ehringsdorf, 31
Elephant, 54
Emiliani, Cesare, 18
Emiran flake point, 73
England, 163-168;
prehistoric, 19, 40;
farmers in, 140
Eoanthropus dawsoni, 29
Eoliths, 41
Erich, 152
Eridu, 152
Euphrates River, floods in, 148
Europe, cave dwellings, 58;
at end of Ice Age, 93;
early farmers, 140;
glaciers in, 40;
huts in, 86;
routes into, 137-140;
spread of food-production to, 136
Far East, 69, 90
Farmers, 103
Fauresmith industry, 67
Fayum, 135;
radiocarbon date, 146
Fertile Crescent, 107, 146
Figurines, Venus, 84;
at Jarmo, 128;
at Ubaid, 153
Fire, used by Peking man, 54
First Dynasty, Egypt, 147
Fish-hooks, 80, 94
Fishing, 80;
by food-producers, 122
Fish-lines, 80
Fish spears, 94
Flint industry, 127
Fontchevade, 32, 56, 58
Food-collecting, 104, 121;
end of, 104
Food-gatherers, 53, 176
Food-gathering, 99, 104;
in Old World, 104;
stages of, 104
Food-producers, 176
Food-producing economy, 122;
in America, 145;
in Asia, 105
Food-producing revolution, 99, 105;
causes of, 101;
preconditions for, 100
Food-production, beginnings of, 99;
carried to Europe, 110
Food-vessel folk, 164
Forest folk, 97, 98, 104, 110
Fox, Sir Cyril, 174
France, caves in, 56
Galley Hill (fossil type), 29
Garrod, D. A., 73
Gazelle, 114
Germany, 94
Ghassul, 156
Glaciers, 18, 30;
destruction by, 40
Goat, wild, 108;
domesticated, 128
Grain, first planted, 20
Graves, passage, 141;
gallery, 141
Greece, civilization in, 163;
as route to western Europe, 138;
towns in, 162
Grimaldi skeletons, 34
Hackberry seeds used as food, 55
Halaf, 151;
assemblage, 151
Hallstatt, tradition, 169
Hand, development of, 24, 25
Hand adzes, 46
Hand axes, 44
Harpoons, antler, 83, 94;
bone, 82, 94
Hassuna, 131;
assemblage, 131, 132
Heidelberg, fossil type, 28
Hill-forts, in England, 171;
in Scotland, 172
Hilly flanks of Near East, 107, 108, 125, 131, 146, 147
History, beginning of, 7, 17
Hoes, 112
Holland, 164
Homo sapiens, 32
Hooton, E. A., 34
Horse, 112;
wild, in cave art, 85;
in China, 54
Hotu cave, 126
Houses, 122;
at Jarmo, 128;
at Halaf, 151
Howe, Bruce, 116
Howell, F. Clark, 30
Hunting, 93
Ice Age, in Asia, 99;
beginning of, 18;
glaciers in, 41;
last glaciation, 93
Incas, 145
India, 90, 136
Industrialization, 178
Industry, blade-tool, 88;
defined, 58;
ground stone, 94
Internationalism, 162
Iran, 107, 147
Iraq, 107, 124, 127, 136, 147
Iron, introduction of, 170
Irrigation, 123, 149, 155
Italy, 138
Jacobsen, T. J., 157
Jarmo, 109, 126, 128, 130;
assemblage, 129
Java, 23, 29
Java man, 26, 27, 29
Jefferson, Thomas, 11
Jericho, 119, 133
Judaidah, 134
Kafuan, 48
Kanam, 23, 36
Karim Shahir, 116-119, 124;
assemblage, 116, 117
Keith, Sir Arthur, 33
Kelley, Harper, 51
Kharga, 126
Khartoum, 136
Knives, 80
Krogman, W. M., 3, 25
Lamps, 85
Land bridges in Mediterranean, 19
La Tène phase, 170
Laurel leaf point, 78, 89
Leakey, L. S. B., 40
Le Moustier, 57
Levalloisian, 47, 61, 62
Levalloiso-Mousterian, 47, 63
Little Woodbury, 170
Magic, used by hunters, 123
Maglemosian, assemblage, 94, 95;
folk, 98
Makapan, 40
Mammoth, 93;
in cave art, 85
Man-apes, 26
Mango, 107
Mankind, age, 17
Maringer, J., 45
Markets, 155
Marston, A. T., 11
Mathiassen, T., 97
McCown, T. D., 33
Meganthropus, 26, 27, 36
Men, defined, 25;
modern, 32
Merimde, 135
Mersin, 133
Metal-workers, 160, 163, 167, 172
Micoquian, 48, 60
Microliths, 87;
at Jarmo, 130;
lunates, 87;
trapezoids, 87;
triangles, 87
Minerals used as coloring matter, 66
Mine-shafts, 140
Mlefaat, 126, 127
Mongoloids, 29, 90
Mortars, 114, 118, 127
Mounds, how formed, 12
Mount Carmel, 11, 33, 52, 59, 64, 69, 113, 114
Mousterian man, 64
Mousterian tools, 61, 62;
of Acheulean tradition, 62
Movius, H. L., 47
Natufian, animals in, 114;
assemblage, 113, 114, 115;
burials, 114;
date of, 113
Neanderthal man, 29, 30, 31, 56
Near East, beginnings of civilization in, 20, 144;
cave sites, 58;
climate in Ice Age, 99;
Fertile Crescent, 107, 146;
food-production in, 99;
Natufian assemblage in, 113-115;
stone tools, 114
Needles, 80
Negroid, 34
New World, 90
Nile River valley, 102, 134;
floods in, 148
Nuclear area, 106, 110;
in Near East, 107
Obsidian, used for blade tools, 71;
at Jarmo, 130
Ochre, red, with burials, 86
Oldowan, 48
Old World, 67, 70, 90;
continental phases in, 18
Olorgesailie, 40, 51
Ostrich, in China, 54
Ovens, 128
Oxygen isotopes, 18
Paintings in caves, 83
Paleoanthropic man, 50
Palestine, burials, 56;
cave sites, 52;
types of man, 69
Parpallo, 89
Patjitanian, 45, 47
Pebble tools, 42
Peking cave, 54;
animals in, 54
Peking man, 27, 28, 29, 54, 58
Pendants, 80;
bone, 114
Pestle, 114
Peterborough, 141;
assemblage, 141
Pictographic signs, 158
Pig, wild, 108
Piltdown man, 29
Pins, 80
Pithecanthropus, 26, 27, 30, 36
Pleistocene, 18, 25
Plows developed, 123
Points, arrow, 76;
laurel leaf, 78;
shouldered, 78, 79;
split-based bone, 80, 82;
tanged, 76;
willow leaf, 78
Potatoes, in America, 145
Pottery, 122, 130, 156;
decorated, 142;
painted, 131, 151, 152;
Susa style, 156;
in tombs, 141
Prehistory, defined, 7;
range of, 18
Pre-neanderthaloids, 30, 31, 37
Pre-Solutrean point, 89
Pre-Stellenbosch, 48
Proto-Literate assemblage, 157-160
Race, 35;
biological, 36;
pure, 16
Radioactivity, 9, 10
Radioactive carbon dates, 18, 92, 120, 130, 135, 156
Redfield, Robert, 38, 49
Reed, C. A., 128
Reindeer, 94
Rhinoceros, 93;
in cave art, 85
Rhodesian man, 32
Riss glaciation, 58
Rock-shelters, 58;
art in, 85
Saccopastore, 31
Sahara Desert, 34, 102
Samarra, 152;
pottery, 131, 152
Sangoan industry, 67
Sauer, Carl, 136
Sbaikian point, 89
Schliemann, H., 11, 12
Scotland, 171
Scraper, flake, 79;
end-scraper on blade, 77, 78;
keel-shaped, 79, 80, 81
Sculpture in caves, 83
Sebilian III, 126
Shaheinab, 135
Sheep, wild, 108;
at Skara Brae, 142;
in China, 54
Shellfish, 142
Ship, Ubaidian, 153
Sialk, 126, 134;
assemblage, 134
Siberia, 88;
pathway to New World, 98
Sickle, 112, 153;
blade, 113, 130
Silo, 122
Sinanthropus, 27, 30, 35
Skara Brae, 142
Snails used as food, 128
Soan, 47
Solecki, R., 116
Solo (fossil type), 29, 32
Solutrean industry, 77
Spear, shaft, 78;
thrower, 82, 83
Speech, development of organs of, 25
Squash, in America, 145
Steinheim fossil skull, 28
Stillbay industry, 67
Stonehenge, 166
Stratification, in caves, 12, 57;
in sites, 12
Swanscombe (fossil type), 11, 28
Syria, 107
Tabun, 60, 71
Tardenoisian, 97
Taro, 107
Tasa, 135
Tayacian, 47, 59
Teeth, pierced, in beads and pendants, 114
Temples, 123, 155
Tepe Gawra, 156
Ternafine, 29
Teshik Tash, 69
Textiles, 122
Thong-stropper, 80
Tigris River, floods in, 148
Toggle, 80
Tomatoes, in America, 145
Tombs, megalithic, 141
Tool-making, 42, 49
Tool-preparation traditions, 65
Tools, 62;
antler, 80;
blade, 70, 71, 75;
bone, 66;
chopper, 47;
core-biface, 43, 48, 60, 61;
flake, 44, 47, 51, 60, 64;
flint, 80, 127;
ground stone, 68, 127;
handles, 94;
pebble, 42, 43, 48, 53;
use of, 24
Touf (mud wall), 128
Toynbee, A. J., 101
Trade, 130, 155, 162
Traders, 167
Traditions, 15;
blade tool, 70;
definition of, 51;
interpretation of, 49;
tool-making, 42, 48;
chopper-tool, 47;
chopper-chopping tool, 45;
core-biface, 43, 48;
flake, 44, 47;
pebble tool, 42, 48
Tool-making, prehistory of, 42
Turkey, 107, 108
Ubaid, 153;
assemblage, 153-155
Urnfields, 168, 169
Village-farming community era, 105, 119
Wad B, 72
Wadjak, 34
Warka phase, 156;
assemblage, 156
Washburn, Sherwood L., 36
Water buffalo, domestication of, 107
Weidenreich, F., 29, 34
Wessex, 166, 167
Wheat, wild, 108;
partially domesticated, 127
Willow leaf point, 78
Windmill Hill, 138;
assemblage, 138, 140
Witch doctors, 68
Wool, 112;
in garments, 167
Writing, 158;
cuneiform, 158
Würm I glaciation, 58
Zebu cattle, domestication of, 107
Zeuner, F. E., 73
* * * * * *
Transcriber's note:
Punctuation, hyphenation, and spelling were made consistent when a
predominant preference was found in this book; otherwise they were not
changed.
Simple typographical errors were corrected; occasional unbalanced
quotation marks retained.
Ambiguous hyphens at the ends of lines were retained.
Index not checked for proper alphabetization or correct page references.
In the original book, chapter headings were accompanied by
illustrations, sometimes above, sometimes below, and sometimes
adjacent. In this eBook those illustrations always appear below the
headings.
***END OF THE PROJECT GUTENBERG EBOOK PREHISTORIC MEN***
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Author : Alexander Pantyukhin
Date : November 2, 2022
Task:
Given the root of a binary tree, determine if it is a valid binary search
tree (BST).
A valid binary search tree is defined as follows:
- The left subtree of a node contains only nodes with keys less than the node's key.
- The right subtree of a node contains only nodes with keys greater than the node's key.
- Both the left and right subtrees must also be binary search trees.
Implementation notes:
Depth-first search approach.
leetcode: https://leetcode.com/problems/validate-binary-search-tree/
Let n be the number of nodes in the tree
Runtime: O(n)
Space: O(n) in the worst case (the recursion stack can be as deep as the tree height)
"""
from __future__ import annotations
from dataclasses import dataclass
@dataclass
class TreeNode:
data: float
left: TreeNode | None = None
right: TreeNode | None = None
def is_binary_search_tree(root: TreeNode | None) -> bool:
"""
>>> is_binary_search_tree(TreeNode(data=2,
... left=TreeNode(data=1),
... right=TreeNode(data=3))
... )
True
>>> is_binary_search_tree(TreeNode(data=0,
... left=TreeNode(data=-11),
... right=TreeNode(data=3))
... )
True
>>> is_binary_search_tree(TreeNode(data=5,
... left=TreeNode(data=1),
... right=TreeNode(data=4, left=TreeNode(data=3)))
... )
False
>>> is_binary_search_tree(TreeNode(data='a',
... left=TreeNode(data=1),
... right=TreeNode(data=4, left=TreeNode(data=3)))
... )
Traceback (most recent call last):
...
ValueError: Each node should be type of TreeNode and data should be float.
>>> is_binary_search_tree(TreeNode(data=2,
... left=TreeNode([]),
... right=TreeNode(data=4, left=TreeNode(data=3)))
... )
Traceback (most recent call last):
...
ValueError: Each node should be type of TreeNode and data should be float.
"""
# Validation
def is_valid_tree(node: TreeNode | None) -> bool:
"""
>>> is_valid_tree(None)
True
>>> is_valid_tree('abc')
False
>>> is_valid_tree(TreeNode(data='not a float'))
False
>>> is_valid_tree(TreeNode(data=1, left=TreeNode('123')))
False
"""
if node is None:
return True
if not isinstance(node, TreeNode):
return False
try:
float(node.data)
except (TypeError, ValueError):
return False
return is_valid_tree(node.left) and is_valid_tree(node.right)
if not is_valid_tree(root):
raise ValueError(
"Each node should be type of TreeNode and data should be float."
)
def is_binary_search_tree_recursive_check(
node: TreeNode | None, left_bound: float, right_bound: float
) -> bool:
"""
>>> is_binary_search_tree_recursive_check(None, 0, 0)
True
>>> is_binary_search_tree_recursive_check(TreeNode(data=1), 10, 20)
False
"""
if node is None:
return True
return (
left_bound < node.data < right_bound
and is_binary_search_tree_recursive_check(node.left, left_bound, node.data)
and is_binary_search_tree_recursive_check(
node.right, node.data, right_bound
)
)
return is_binary_search_tree_recursive_check(root, -float("inf"), float("inf"))
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Author : Alexander Pantyukhin
Date : November 2, 2022
Task:
Given the root of a binary tree, determine if it is a valid binary search
tree (BST).
A valid binary search tree is defined as follows:
- The left subtree of a node contains only nodes with keys less than the node's key.
- The right subtree of a node contains only nodes with keys greater than the node's key.
- Both the left and right subtrees must also be binary search trees.
Implementation notes:
Depth-first search approach.
leetcode: https://leetcode.com/problems/validate-binary-search-tree/
Let n be the number of nodes in the tree
Runtime: O(n)
Space: O(n) in the worst case (the recursion stack can be as deep as the tree height)
"""
from __future__ import annotations
from dataclasses import dataclass
@dataclass
class TreeNode:
data: float
left: TreeNode | None = None
right: TreeNode | None = None
def is_binary_search_tree(root: TreeNode | None) -> bool:
"""
>>> is_binary_search_tree(TreeNode(data=2,
... left=TreeNode(data=1),
... right=TreeNode(data=3))
... )
True
>>> is_binary_search_tree(TreeNode(data=0,
... left=TreeNode(data=-11),
... right=TreeNode(data=3))
... )
True
>>> is_binary_search_tree(TreeNode(data=5,
... left=TreeNode(data=1),
... right=TreeNode(data=4, left=TreeNode(data=3)))
... )
False
>>> is_binary_search_tree(TreeNode(data='a',
... left=TreeNode(data=1),
... right=TreeNode(data=4, left=TreeNode(data=3)))
... )
Traceback (most recent call last):
...
ValueError: Each node should be type of TreeNode and data should be float.
>>> is_binary_search_tree(TreeNode(data=2,
... left=TreeNode([]),
... right=TreeNode(data=4, left=TreeNode(data=3)))
... )
Traceback (most recent call last):
...
ValueError: Each node should be type of TreeNode and data should be float.
"""
# Validation
def is_valid_tree(node: TreeNode | None) -> bool:
"""
>>> is_valid_tree(None)
True
>>> is_valid_tree('abc')
False
>>> is_valid_tree(TreeNode(data='not a float'))
False
>>> is_valid_tree(TreeNode(data=1, left=TreeNode('123')))
False
"""
if node is None:
return True
if not isinstance(node, TreeNode):
return False
try:
float(node.data)
except (TypeError, ValueError):
return False
return is_valid_tree(node.left) and is_valid_tree(node.right)
if not is_valid_tree(root):
raise ValueError(
"Each node should be type of TreeNode and data should be float."
)
def is_binary_search_tree_recursive_check(
node: TreeNode | None, left_bound: float, right_bound: float
) -> bool:
"""
>>> is_binary_search_tree_recursive_check(None)
True
>>> is_binary_search_tree_recursive_check(TreeNode(data=1), 10, 20)
False
"""
if node is None:
return True
return (
left_bound < node.data < right_bound
and is_binary_search_tree_recursive_check(node.left, left_bound, node.data)
and is_binary_search_tree_recursive_check(
node.right, node.data, right_bound
)
)
return is_binary_search_tree_recursive_check(root, -float("inf"), float("inf"))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
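The BST file above validates by propagating (left_bound, right_bound) down the tree. A minimal standalone sketch of that bound-propagation idea (the `Node`/`is_bst` names are mine, not from the file):

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Node:
    data: float
    left: Node | None = None
    right: Node | None = None


def is_bst(node: Node | None, low: float = float("-inf"), high: float = float("inf")) -> bool:
    # Every node must lie strictly between the bounds inherited from its ancestors.
    if node is None:
        return True
    if not low < node.data < high:
        return False
    return is_bst(node.left, low, node.data) and is_bst(node.right, node.data, high)


valid = Node(2, Node(1), Node(3))
invalid = Node(5, Node(1), Node(4, Node(3)))  # invalid: 4 sits in the right subtree of 5
print(is_bst(valid), is_bst(invalid))  # → True False
```

Note that comparing only parent and child is not enough; the bounds are what catch a grandchild like the 4-under-5 case above.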
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
pseudo-code
DIJKSTRA(graph G, start vertex s, destination vertex d):
//all nodes initially unexplored
1 - let H = min heap data structure, initialized with 0 and s [here 0 indicates
the distance from start vertex s]
2 - while H is non-empty:
3 - remove the first node and cost of H, call it U and cost
4 - if U has been previously explored:
5 - go to the while loop, line 2 //Once a node is explored there is no need
to make it again
6 - mark U as explored
7 - if U is d:
8 - return cost // total cost from start to destination vertex
9 - for each edge(U, V): c=cost of edge(U,V) // for V in graph[U]
10 - if V explored:
11 - go to next V in line 9
12 - total_cost = cost + c
13 - add (total_cost,V) to H
You can think of cost as a distance, where Dijkstra finds the shortest distance
between vertices s and v in a graph G. The use of a min heap as H guarantees
that once a vertex has been explored there can be no other path to it with a
shorter distance; that holds because heapq.heappop always returns the
unexplored vertex with the smallest distance, and the heap stores not just the
cost of a single edge but the total distance of the path from the start vertex
to each candidate vertex.
"""
import heapq
def dijkstra(graph, start, end):
"""Return the cost of the shortest path between vertices start and end.
>>> dijkstra(G, "E", "C")
6
>>> dijkstra(G2, "E", "F")
3
>>> dijkstra(G3, "E", "F")
3
"""
heap = [(0, start)] # cost from start node,end node
visited = set()
while heap:
(cost, u) = heapq.heappop(heap)
if u in visited:
continue
visited.add(u)
if u == end:
return cost
for v, c in graph[u]:
if v in visited:
continue
next_item = cost + c
heapq.heappush(heap, (next_item, v))
return -1
G = {
"A": [["B", 2], ["C", 5]],
"B": [["A", 2], ["D", 3], ["E", 1], ["F", 1]],
"C": [["A", 5], ["F", 3]],
"D": [["B", 3]],
"E": [["B", 4], ["F", 3]],
"F": [["C", 3], ["E", 3]],
}
r"""
Layout of G2:
E -- 1 --> B -- 1 --> C -- 1 --> D -- 1 --> F
\ /\
\ ||
----------------- 3 --------------------
"""
G2 = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["F", 3]],
"F": [],
}
r"""
Layout of G3:
E -- 1 --> B -- 1 --> C -- 1 --> D -- 1 --> F
\ /\
\ ||
-------- 2 ---------> G ------- 1 ------
"""
G3 = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["G", 2]],
"F": [],
"G": [["F", 1]],
}
short_distance = dijkstra(G, "E", "C")
print(short_distance) # E -- 3 --> F -- 3 --> C == 6
short_distance = dijkstra(G2, "E", "F")
print(short_distance) # E -- 3 --> F == 3
short_distance = dijkstra(G3, "E", "F")
print(short_distance) # E -- 2 --> G -- 1 --> F == 3
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
pseudo-code
DIJKSTRA(graph G, start vertex s, destination vertex d):
//all nodes initially unexplored
1 - let H = min heap data structure, initialized with 0 and s [here 0 indicates
the distance from start vertex s]
2 - while H is non-empty:
3 - remove the first node and cost of H, call it U and cost
4 - if U has been previously explored:
5 - go to the while loop, line 2 //Once a node is explored there is no need
to make it again
6 - mark U as explored
7 - if U is d:
8 - return cost // total cost from start to destination vertex
9 - for each edge(U, V): c=cost of edge(U,V) // for V in graph[U]
10 - if V explored:
11 - go to next V in line 9
12 - total_cost = cost + c
13 - add (total_cost,V) to H
You can think of cost as a distance, where Dijkstra finds the shortest distance
between vertices s and v in a graph G. The use of a min heap as H guarantees
that once a vertex has been explored there can be no other path to it with a
shorter distance; that holds because heapq.heappop always returns the
unexplored vertex with the smallest distance, and the heap stores not just the
cost of a single edge but the total distance of the path from the start vertex
to each candidate vertex.
"""
import heapq
def dijkstra(graph, start, end):
"""Return the cost of the shortest path between vertices start and end.
>>> dijkstra(G, "E", "C")
6
>>> dijkstra(G2, "E", "F")
3
>>> dijkstra(G3, "E", "F")
3
"""
heap = [(0, start)] # cost from start node,end node
visited = set()
while heap:
(cost, u) = heapq.heappop(heap)
if u in visited:
continue
visited.add(u)
if u == end:
return cost
for v, c in graph[u]:
if v in visited:
continue
next_item = cost + c
heapq.heappush(heap, (next_item, v))
return -1
G = {
"A": [["B", 2], ["C", 5]],
"B": [["A", 2], ["D", 3], ["E", 1], ["F", 1]],
"C": [["A", 5], ["F", 3]],
"D": [["B", 3]],
"E": [["B", 4], ["F", 3]],
"F": [["C", 3], ["E", 3]],
}
r"""
Layout of G2:
E -- 1 --> B -- 1 --> C -- 1 --> D -- 1 --> F
\ /\
\ ||
----------------- 3 --------------------
"""
G2 = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["F", 3]],
"F": [],
}
r"""
Layout of G3:
E -- 1 --> B -- 1 --> C -- 1 --> D -- 1 --> F
\ /\
\ ||
-------- 2 ---------> G ------- 1 ------
"""
G3 = {
"B": [["C", 1]],
"C": [["D", 1]],
"D": [["F", 1]],
"E": [["B", 1], ["G", 2]],
"F": [],
"G": [["F", 1]],
}
short_distance = dijkstra(G, "E", "C")
print(short_distance) # E -- 3 --> F -- 3 --> C == 6
short_distance = dijkstra(G2, "E", "F")
print(short_distance) # E -- 3 --> F == 3
short_distance = dijkstra(G3, "E", "F")
print(short_distance) # E -- 2 --> G -- 1 --> F == 3
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
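The Dijkstra row above follows the pseudo-code exactly (pop cheapest, skip visited, relax edges). A condensed, self-contained version of the same function, exercised on a tiny graph I made up where the detour beats the direct edge:

```python
import heapq


def dijkstra(graph: dict, start: str, end: str) -> int:
    # Min-heap of (cumulative cost from start, vertex); pop order is by cost.
    heap, visited = [(0, start)], set()
    while heap:
        cost, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == end:
            return cost
        for v, c in graph[u]:
            if v not in visited:
                heapq.heappush(heap, (cost + c, v))
    return -1  # end unreachable from start


# Direct edge A->C costs 5, but the path A->B->C costs only 1 + 2 = 3.
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 2)], "C": []}
print(dijkstra(graph, "A", "C"))  # → 3
```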
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
This is a pure Python implementation of the Geometric Series algorithm
https://en.wikipedia.org/wiki/Geometric_series
Run the doctests with the following command:
python3 -m doctest -v geometric_series.py
or
python -m doctest -v geometric_series.py
For manual testing run:
python3 geometric_series.py
"""
from __future__ import annotations
def geometric_series(
nth_term: float,
start_term_a: float,
common_ratio_r: float,
) -> list[float]:
"""
Pure Python implementation of Geometric Series algorithm
:param nth_term: The last term (nth term of Geometric Series)
:param start_term_a : The first term of Geometric Series
:param common_ratio_r : The common ratio between all the terms
:return: The Geometric Series starting from the first term a, with each term
being the previous one multiplied by the common ratio r, up to the nth term
Examples:
>>> geometric_series(4, 2, 2)
[2, 4.0, 8.0, 16.0]
>>> geometric_series(4.0, 2.0, 2.0)
[2.0, 4.0, 8.0, 16.0]
>>> geometric_series(4.1, 2.1, 2.1)
[2.1, 4.41, 9.261000000000001, 19.448100000000004]
>>> geometric_series(4, 2, -2)
[2, -4.0, 8.0, -16.0]
>>> geometric_series(4, -2, 2)
[-2, -4.0, -8.0, -16.0]
>>> geometric_series(-4, 2, 2)
[]
>>> geometric_series(0, 100, 500)
[]
>>> geometric_series(1, 1, 1)
[1]
>>> geometric_series(0, 0, 0)
[]
"""
if not all((nth_term, start_term_a, common_ratio_r)):
return []
series: list[float] = []
power = 1
multiple = common_ratio_r
for _ in range(int(nth_term)):
if not series:
series.append(start_term_a)
else:
power += 1
series.append(float(start_term_a * multiple))
multiple = pow(float(common_ratio_r), power)
return series
if __name__ == "__main__":
import doctest
doctest.testmod()
nth_term = float(input("Enter the last number (n term) of the Geometric Series"))
start_term_a = float(input("Enter the starting term (a) of the Geometric Series"))
common_ratio_r = float(
input("Enter the common ratio between two terms (r) of the Geometric Series")
)
print("Formula of Geometric Series => a + ar + ar^2 ... +ar^n")
print(geometric_series(nth_term, start_term_a, common_ratio_r))
| """
This is a pure Python implementation of the Geometric Series algorithm
https://en.wikipedia.org/wiki/Geometric_series
Run the doctests with the following command:
python3 -m doctest -v geometric_series.py
or
python -m doctest -v geometric_series.py
For manual testing run:
python3 geometric_series.py
"""
from __future__ import annotations
def geometric_series(
nth_term: float,
start_term_a: float,
common_ratio_r: float,
) -> list[float]:
"""
Pure Python implementation of Geometric Series algorithm
:param nth_term: The last term (nth term of Geometric Series)
:param start_term_a : The first term of Geometric Series
:param common_ratio_r : The common ratio between all the terms
:return: The Geometric Series starting from the first term a, with each term
being the previous one multiplied by the common ratio r, up to the nth term
Examples:
>>> geometric_series(4, 2, 2)
[2, 4.0, 8.0, 16.0]
>>> geometric_series(4.0, 2.0, 2.0)
[2.0, 4.0, 8.0, 16.0]
>>> geometric_series(4.1, 2.1, 2.1)
[2.1, 4.41, 9.261000000000001, 19.448100000000004]
>>> geometric_series(4, 2, -2)
[2, -4.0, 8.0, -16.0]
>>> geometric_series(4, -2, 2)
[-2, -4.0, -8.0, -16.0]
>>> geometric_series(-4, 2, 2)
[]
>>> geometric_series(0, 100, 500)
[]
>>> geometric_series(1, 1, 1)
[1]
>>> geometric_series(0, 0, 0)
[]
"""
if not all((nth_term, start_term_a, common_ratio_r)):
return []
series: list[float] = []
power = 1
multiple = common_ratio_r
for _ in range(int(nth_term)):
if not series:
series.append(start_term_a)
else:
power += 1
series.append(float(start_term_a * multiple))
multiple = pow(float(common_ratio_r), power)
return series
if __name__ == "__main__":
import doctest
doctest.testmod()
nth_term = float(input("Enter the last number (n term) of the Geometric Series"))
start_term_a = float(input("Enter the starting term (a) of the Geometric Series"))
common_ratio_r = float(
input("Enter the common ratio between two terms (r) of the Geometric Series")
)
print("Formula of Geometric Series => a + ar + ar^2 ... +ar^n")
print(geometric_series(nth_term, start_term_a, common_ratio_r))
| -1 |
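The series in the row above is a * r^k for k = 0 .. n-1, which also has the closed-form sum a·(rⁿ − 1)/(r − 1) for r ≠ 1. A condensed sketch (my own variant; it returns ints for int inputs, unlike the file's float-coercing loop):

```python
def geometric_series(n_terms: int, a: float, r: float) -> list[float]:
    # k-th term is a * r**k, so the whole series is a one-liner.
    return [a * r**k for k in range(n_terms)]


terms = geometric_series(4, 2, 2)
print(terms)  # → [2, 4, 8, 16]
# Closed-form sum agrees with the explicit sum (valid for r != 1):
print(sum(terms) == 2 * (2**4 - 1) / (2 - 1))  # → True
```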
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Check whether Graph is Bipartite or Not using BFS
# A Bipartite Graph is a graph whose vertices can be divided into two independent sets,
# U and V such that every edge (u, v) either connects a vertex from U to V or a vertex
# from V to U. In other words, for every edge (u, v), either u belongs to U and v to V,
# or u belongs to V and v to U. We can also say that there is no edge that connects
# vertices of the same set.
from queue import Queue
def check_bipartite(graph):
queue = Queue()
visited = [False] * len(graph)
color = [-1] * len(graph)
def bfs():
while not queue.empty():
u = queue.get()
visited[u] = True
for neighbour in graph[u]:
if neighbour == u:
return False
if color[neighbour] == -1:
color[neighbour] = 1 - color[u]
queue.put(neighbour)
elif color[neighbour] == color[u]:
return False
return True
for i in range(len(graph)):
if not visited[i]:
queue.put(i)
color[i] = 0
if bfs() is False:
return False
return True
if __name__ == "__main__":
# Adjacency List of graph
print(check_bipartite({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}))
| # Check whether Graph is Bipartite or Not using BFS
# A Bipartite Graph is a graph whose vertices can be divided into two independent sets,
# U and V such that every edge (u, v) either connects a vertex from U to V or a vertex
# from V to U. In other words, for every edge (u, v), either u belongs to U and v to V,
# or u belongs to V and v to U. We can also say that there is no edge that connects
# vertices of the same set.
from queue import Queue
def check_bipartite(graph):
queue = Queue()
visited = [False] * len(graph)
color = [-1] * len(graph)
def bfs():
while not queue.empty():
u = queue.get()
visited[u] = True
for neighbour in graph[u]:
if neighbour == u:
return False
if color[neighbour] == -1:
color[neighbour] = 1 - color[u]
queue.put(neighbour)
elif color[neighbour] == color[u]:
return False
return True
for i in range(len(graph)):
if not visited[i]:
queue.put(i)
color[i] = 0
if bfs() is False:
return False
return True
if __name__ == "__main__":
# Adjacency List of graph
print(check_bipartite({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}))
| -1 |
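The bipartite row above is a BFS two-coloring: alternate colors across each edge and fail if an edge ever joins two same-colored vertices (equivalently, if the graph contains an odd cycle). A compact standalone sketch of the same idea, with the classic even-cycle/odd-cycle pair as test graphs:

```python
from __future__ import annotations

from collections import deque


def is_bipartite(graph: dict[int, list[int]]) -> bool:
    color: dict[int, int] = {}
    for start in graph:  # handle disconnected components
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # alternate colors across each edge
                    queue.append(v)
                elif color[v] == color[u]:  # same color on both ends: odd cycle
                    return False
    return True


square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # even cycle: bipartite
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}           # odd cycle: not bipartite
print(is_bipartite(square), is_bipartite(triangle))  # → True False
```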
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
from collections import deque
class Automaton:
def __init__(self, keywords: list[str]):
self.adlist: list[dict] = []
self.adlist.append(
{"value": "", "next_states": [], "fail_state": 0, "output": []}
)
for keyword in keywords:
self.add_keyword(keyword)
self.set_fail_transitions()
def find_next_state(self, current_state: int, char: str) -> int | None:
for state in self.adlist[current_state]["next_states"]:
if char == self.adlist[state]["value"]:
return state
return None
def add_keyword(self, keyword: str) -> None:
current_state = 0
for character in keyword:
next_state = self.find_next_state(current_state, character)
if next_state is None:
self.adlist.append(
{
"value": character,
"next_states": [],
"fail_state": 0,
"output": [],
}
)
self.adlist[current_state]["next_states"].append(len(self.adlist) - 1)
current_state = len(self.adlist) - 1
else:
current_state = next_state
self.adlist[current_state]["output"].append(keyword)
def set_fail_transitions(self) -> None:
q: deque = deque()
for node in self.adlist[0]["next_states"]:
q.append(node)
self.adlist[node]["fail_state"] = 0
while q:
r = q.popleft()
for child in self.adlist[r]["next_states"]:
q.append(child)
state = self.adlist[r]["fail_state"]
while (
self.find_next_state(state, self.adlist[child]["value"]) is None
and state != 0
):
state = self.adlist[state]["fail_state"]
self.adlist[child]["fail_state"] = self.find_next_state(
state, self.adlist[child]["value"]
)
if self.adlist[child]["fail_state"] is None:
self.adlist[child]["fail_state"] = 0
self.adlist[child]["output"] = (
self.adlist[child]["output"]
+ self.adlist[self.adlist[child]["fail_state"]]["output"]
)
def search_in(self, string: str) -> dict[str, list[int]]:
"""
>>> A = Automaton(["what", "hat", "ver", "er"])
>>> A.search_in("whatever, err ... , wherever")
{'what': [0], 'hat': [1], 'ver': [5, 25], 'er': [6, 10, 22, 26]}
"""
result: dict = {} # returns a dict with keywords and list of its occurrences
current_state = 0
for i in range(len(string)):
while (
self.find_next_state(current_state, string[i]) is None
and current_state != 0
):
current_state = self.adlist[current_state]["fail_state"]
next_state = self.find_next_state(current_state, string[i])
if next_state is None:
current_state = 0
else:
current_state = next_state
for key in self.adlist[current_state]["output"]:
if key not in result:
result[key] = []
result[key].append(i - len(key) + 1)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
from collections import deque
class Automaton:
def __init__(self, keywords: list[str]):
self.adlist: list[dict] = []
self.adlist.append(
{"value": "", "next_states": [], "fail_state": 0, "output": []}
)
for keyword in keywords:
self.add_keyword(keyword)
self.set_fail_transitions()
def find_next_state(self, current_state: int, char: str) -> int | None:
for state in self.adlist[current_state]["next_states"]:
if char == self.adlist[state]["value"]:
return state
return None
def add_keyword(self, keyword: str) -> None:
current_state = 0
for character in keyword:
next_state = self.find_next_state(current_state, character)
if next_state is None:
self.adlist.append(
{
"value": character,
"next_states": [],
"fail_state": 0,
"output": [],
}
)
self.adlist[current_state]["next_states"].append(len(self.adlist) - 1)
current_state = len(self.adlist) - 1
else:
current_state = next_state
self.adlist[current_state]["output"].append(keyword)
def set_fail_transitions(self) -> None:
q: deque = deque()
for node in self.adlist[0]["next_states"]:
q.append(node)
self.adlist[node]["fail_state"] = 0
while q:
r = q.popleft()
for child in self.adlist[r]["next_states"]:
q.append(child)
state = self.adlist[r]["fail_state"]
while (
self.find_next_state(state, self.adlist[child]["value"]) is None
and state != 0
):
state = self.adlist[state]["fail_state"]
self.adlist[child]["fail_state"] = self.find_next_state(
state, self.adlist[child]["value"]
)
if self.adlist[child]["fail_state"] is None:
self.adlist[child]["fail_state"] = 0
self.adlist[child]["output"] = (
self.adlist[child]["output"]
+ self.adlist[self.adlist[child]["fail_state"]]["output"]
)
def search_in(self, string: str) -> dict[str, list[int]]:
"""
>>> A = Automaton(["what", "hat", "ver", "er"])
>>> A.search_in("whatever, err ... , wherever")
{'what': [0], 'hat': [1], 'ver': [5, 25], 'er': [6, 10, 22, 26]}
"""
result: dict = {} # returns a dict with keywords and list of its occurrences
current_state = 0
for i in range(len(string)):
while (
self.find_next_state(current_state, string[i]) is None
and current_state != 0
):
current_state = self.adlist[current_state]["fail_state"]
next_state = self.find_next_state(current_state, string[i])
if next_state is None:
current_state = 0
else:
current_state = next_state
for key in self.adlist[current_state]["output"]:
if key not in result:
result[key] = []
result[key].append(i - len(key) + 1)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
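The Aho-Corasick automaton above finds all keyword occurrences in one pass over the text. As a brute-force reference for sanity-checking its output (O(n·m) instead of the automaton's linear scan; `naive_search` is my own helper, not part of the file), reproducing the doctest's expected result:

```python
def naive_search(text: str, keywords: list[str]) -> dict[str, list[int]]:
    # For each keyword, record every start index at which it occurs in text.
    result: dict[str, list[int]] = {}
    for kw in keywords:
        hits = [i for i in range(len(text) - len(kw) + 1) if text.startswith(kw, i)]
        if hits:
            result[kw] = hits
    return result


print(naive_search("whatever, err ... , wherever", ["what", "hat", "ver", "er"]))
# → {'what': [0], 'hat': [1], 'ver': [5, 25], 'er': [6, 10, 22, 26]}
```

This matches `Automaton(["what", "hat", "ver", "er"]).search_in(...)` in the file's doctest, which is what makes it a useful cross-check.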
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Author: JoΓ£o Gustavo A. Amorim & Gabriel Kunz
# Author email: [email protected] and [email protected]
# Coding date: apr 2019
# Black: True
"""
* This code implements the Hamming code:
https://en.wikipedia.org/wiki/Hamming_code - In telecommunication,
Hamming codes are a family of linear error-correcting codes. Hamming
codes can detect up to two-bit errors or correct one-bit errors
without detection of uncorrected errors. By contrast, the simple
parity code cannot correct errors, and can detect only an odd number
of bits in error. Hamming codes are perfect codes, that is, they
achieve the highest possible rate for codes with their block length
and minimum distance of three.
* the implemented code consists of:
* a function responsible for encoding the message (emitterConverter)
* return the encoded message
* a function responsible for decoding the message (receptorConverter)
* return the decoded message and an ack of data integrity
* how to use:
to use it, declare how many parity bits (sizePari)
you want to include in the message.
For test purposes, you can select a bit to be flipped
as an error; this checks whether the code is working correctly.
Lastly, set the variable holding the message/word to be
encoded (text).
* how this works:
declaration of variables (sizePari, be, text)
converts the message/word (text) to binary using the
text_to_bits function
encodes the message using the rules of hamming encoding
decodes the message using the rules of hamming encoding
print the original message, the encoded message and the
decoded message
forces an error in the coded text variable
decodes the message that was forced the error
print the original message, the encoded message, the bit changed
message and the decoded message
"""
# Imports
import numpy as np
# Functions of binary conversion--------------------------------------
def text_to_bits(text, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_to_bits("msg")
'011011010111001101100111'
"""
bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:]
return bits.zfill(8 * ((len(bits) + 7) // 8))
def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_from_bits('011011010111001101100111')
'msg'
"""
n = int(bits, 2)
return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0"
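A quick standalone round-trip check of the two conversion helpers above (both are redefined here so the snippet runs on its own):

```python
def text_to_bits(text, encoding="utf-8", errors="surrogatepass"):
    # Encode the string to bytes, view the bytes as one big integer,
    # and pad the binary form out to a whole number of 8-bit groups.
    bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:]
    return bits.zfill(8 * ((len(bits) + 7) // 8))


def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"):
    # Reverse: parse the bit string as an integer and decode its bytes.
    n = int(bits, 2)
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0"


assert text_to_bits("msg") == "011011010111001101100111"
assert text_from_bits(text_to_bits("Message01")) == "Message01"
```

The round trip holds because `zfill` restores any leading zero bits that `bin()` drops.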
# Functions of hamming code-------------------------------------------
def emitter_converter(size_par, data):
"""
:param size_par: how many parity bits the message must have
:param data: information bits
:return: message to be transmitted by unreliable medium
- bits of information merged with parity bits
>>> emitter_converter(4, "101010111111")
['1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1']
"""
if size_par + len(data) <= 2**size_par - (len(data) - 1):
raise ValueError("size of parity doesn't match size of data")
data_out = []
parity = []
bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data) + 1)]
# sorted information data for the size of the output data
data_ord = []
# data position template + parity
data_out_gab = []
# parity bit counter
qtd_bp = 0
# counter position of data bits
cont_data = 0
for x in range(1, size_par + len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtd_bp < size_par:
if (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_ord.append(data[cont_data])
cont_data += 1
else:
data_ord.append(None)
# Calculates parity
qtd_bp = 0 # parity bit counter
for bp in range(1, size_par + 1):
# Bit counter one for a given parity
cont_bo = 0
# counter to control the loop reading
cont_loop = 0
for x in data_ord:
if x is not None:
try:
aux = (bin_pos[cont_loop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
cont_bo += 1
cont_loop += 1
parity.append(cont_bo % 2)
qtd_bp += 1
# Mount the message
cont_bp = 0 # parity bit counter
for x in range(size_par + len(data)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
else:
data_out.append(data_ord[x])
return data_out
def receptor_converter(size_par, data):
"""
>>> receptor_converter(4, "1111010010111111")
(['1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1'], True)
"""
# data position template + parity
data_out_gab = []
# Parity bit counter
qtd_bp = 0
# Counter p data bit reading
cont_data = 0
# list of parity received
parity_received = []
data_output = []
for x in range(1, len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_output.append(data[cont_data])
else:
parity_received.append(data[cont_data])
cont_data += 1
# -----------calculates the parity with the data
data_out = []
parity = []
bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data_output) + 1)]
# sorted information data for the size of the output data
data_ord = []
# Data position feedback + parity
data_out_gab = []
# Parity bit counter
qtd_bp = 0
# Counter p data bit reading
cont_data = 0
for x in range(1, size_par + len(data_output) + 1):
# Performs a template position of bits - who should be given,
# and who should be parity
if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_ord.append(data_output[cont_data])
cont_data += 1
else:
data_ord.append(None)
# Calculates parity
qtd_bp = 0 # parity bit counter
for bp in range(1, size_par + 1):
# Bit counter one for a certain parity
cont_bo = 0
# Counter to control loop reading
cont_loop = 0
for x in data_ord:
if x is not None:
try:
aux = (bin_pos[cont_loop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
cont_bo += 1
cont_loop += 1
parity.append(str(cont_bo % 2))
qtd_bp += 1
# Mount the message
cont_bp = 0 # Parity bit counter
for x in range(size_par + len(data_output)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
else:
data_out.append(data_ord[x])
ack = parity_received == parity
return data_output, ack
# ---------------------------------------------------------------------
"""
# Example how to use
# number of parity bits
sizePari = 4
# location of the bit that will be forced an error
be = 2
# Message/word to be encoded and decoded with hamming
# text = input("Enter the word to be read: ")
text = "Message01"
# Convert the message to binary
binaryText = text_to_bits(text)
# Prints the binary of the string
print("Text input in binary is '" + binaryText + "'")
# total transmitted bits
totalBits = len(binaryText) + sizePari
print("Size of data is " + str(totalBits))
print("\n --Message exchange--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
print("\n --Force error--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
# forces error
dataOut[-be] = "1" * (dataOut[-be] == "0") + "0" * (dataOut[-be] == "1")
print("Data after transmission -> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
"""
| # Author: JoΓ£o Gustavo A. Amorim & Gabriel Kunz
# Author email: [email protected] and [email protected]
# Coding date: apr 2019
# Black: True
"""
* This code implements the Hamming code:
https://en.wikipedia.org/wiki/Hamming_code - In telecommunication,
Hamming codes are a family of linear error-correcting codes. Hamming
codes can detect up to two-bit errors or correct one-bit errors
without detection of uncorrected errors. By contrast, the simple
parity code cannot correct errors, and can detect only an odd number
of bits in error. Hamming codes are perfect codes, that is, they
achieve the highest possible rate for codes with their block length
and minimum distance of three.
* the implemented code consists of:
* a function responsible for encoding the message (emitterConverter)
* return the encoded message
* a function responsible for decoding the message (receptorConverter)
* return the decoded message and an ack of data integrity
* how to use:
to use it, declare how many parity bits (sizePari)
you want to include in the message.
For test purposes, you can select a bit to be flipped
as an error; this checks whether the code is working correctly.
Lastly, set the variable holding the message/word to be
encoded (text).
* how this works:
declaration of variables (sizePari, be, text)
converts the message/word (text) to binary using the
text_to_bits function
encodes the message using the rules of hamming encoding
decodes the message using the rules of hamming encoding
print the original message, the encoded message and the
decoded message
forces an error in the coded text variable
decodes the message that was forced the error
print the original message, the encoded message, the bit changed
message and the decoded message
"""
# Imports
import numpy as np
# Functions of binary conversion--------------------------------------
def text_to_bits(text, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_to_bits("msg")
'011011010111001101100111'
"""
bits = bin(int.from_bytes(text.encode(encoding, errors), "big"))[2:]
return bits.zfill(8 * ((len(bits) + 7) // 8))
def text_from_bits(bits, encoding="utf-8", errors="surrogatepass"):
"""
>>> text_from_bits('011011010111001101100111')
'msg'
"""
n = int(bits, 2)
return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(encoding, errors) or "\0"
# Functions of hamming code-------------------------------------------
def emitter_converter(size_par, data):
"""
:param size_par: how many parity bits the message must have
:param data: information bits
:return: message to be transmitted by unreliable medium
- bits of information merged with parity bits
>>> emitter_converter(4, "101010111111")
['1', '1', '1', '1', '0', '1', '0', '0', '1', '0', '1', '1', '1', '1', '1', '1']
"""
if size_par + len(data) <= 2**size_par - (len(data) - 1):
raise ValueError("size of parity doesn't match size of data")
data_out = []
parity = []
bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data) + 1)]
# sorted information data for the size of the output data
data_ord = []
# data position template + parity
data_out_gab = []
# parity bit counter
qtd_bp = 0
# counter position of data bits
cont_data = 0
for x in range(1, size_par + len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtd_bp < size_par:
if (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_ord.append(data[cont_data])
cont_data += 1
else:
data_ord.append(None)
# Calculates parity
qtd_bp = 0 # parity bit counter
for bp in range(1, size_par + 1):
# Bit counter one for a given parity
cont_bo = 0
# counter to control the loop reading
cont_loop = 0
for x in data_ord:
if x is not None:
try:
aux = (bin_pos[cont_loop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
cont_bo += 1
cont_loop += 1
parity.append(cont_bo % 2)
qtd_bp += 1
# Mount the message
cont_bp = 0 # parity bit counter
for x in range(size_par + len(data)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
else:
data_out.append(data_ord[x])
return data_out
def receptor_converter(size_par, data):
"""
>>> receptor_converter(4, "1111010010111111")
(['1', '0', '1', '0', '1', '0', '1', '1', '1', '1', '1', '1'], True)
"""
# data position template + parity
data_out_gab = []
# Parity bit counter
qtd_bp = 0
# Counter p data bit reading
cont_data = 0
# list of parity received
parity_received = []
data_output = []
for x in range(1, len(data) + 1):
# Performs a template of bit positions - who should be given,
# and who should be parity
if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_output.append(data[cont_data])
else:
parity_received.append(data[cont_data])
cont_data += 1
# -----------calculates the parity with the data
data_out = []
parity = []
bin_pos = [bin(x)[2:] for x in range(1, size_par + len(data_output) + 1)]
# sorted information data for the size of the output data
data_ord = []
# Data position feedback + parity
data_out_gab = []
# Parity bit counter
qtd_bp = 0
# Counter p data bit reading
cont_data = 0
for x in range(1, size_par + len(data_output) + 1):
# Performs a template position of bits - who should be given,
# and who should be parity
if qtd_bp < size_par and (np.log(x) / np.log(2)).is_integer():
data_out_gab.append("P")
qtd_bp = qtd_bp + 1
else:
data_out_gab.append("D")
# Sorts the data to the new output size
if data_out_gab[-1] == "D":
data_ord.append(data_output[cont_data])
cont_data += 1
else:
data_ord.append(None)
# Calculates parity
qtd_bp = 0 # parity bit counter
for bp in range(1, size_par + 1):
# Bit counter one for a certain parity
cont_bo = 0
# Counter to control loop reading
cont_loop = 0
for x in data_ord:
if x is not None:
try:
aux = (bin_pos[cont_loop])[-1 * (bp)]
except IndexError:
aux = "0"
if aux == "1" and x == "1":
cont_bo += 1
cont_loop += 1
parity.append(str(cont_bo % 2))
qtd_bp += 1
# Mount the message
cont_bp = 0 # Parity bit counter
for x in range(size_par + len(data_output)):
if data_ord[x] is None:
data_out.append(str(parity[cont_bp]))
cont_bp += 1
else:
data_out.append(data_ord[x])
ack = parity_received == parity
return data_output, ack
# ---------------------------------------------------------------------
"""
# Example how to use
# number of parity bits
sizePari = 4
# location of the bit that will be forced an error
be = 2
# Message/word to be encoded and decoded with hamming
# text = input("Enter the word to be read: ")
text = "Message01"
# Convert the message to binary
binaryText = text_to_bits(text)
# Prints the binary of the string
print("Text input in binary is '" + binaryText + "'")
# total transmitted bits
totalBits = len(binaryText) + sizePari
print("Size of data is " + str(totalBits))
print("\n --Message exchange--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
print("\n --Force error--")
print("Data to send ------------> " + binaryText)
dataOut = emitterConverter(sizePari, binaryText)
print("Data converted ----------> " + "".join(dataOut))
# forces error
dataOut[-be] = "1" * (dataOut[-be] == "0") + "0" * (dataOut[-be] == "1")
print("Data after transmission -> " + "".join(dataOut))
dataReceiv, ack = receptorConverter(sizePari, dataOut)
print(
"Data receive ------------> "
+ "".join(dataReceiv)
+ "\t\t -- Data integrity: "
+ str(ack)
)
"""
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def max_product_subarray(numbers: list[int]) -> int:
"""
Returns the maximum product that can be obtained by multiplying a
contiguous subarray of the given integer list `numbers`.
Example:
>>> max_product_subarray([2, 3, -2, 4])
6
>>> max_product_subarray((-2, 0, -1))
0
>>> max_product_subarray([2, 3, -2, 4, -1])
48
>>> max_product_subarray([-1])
-1
>>> max_product_subarray([0])
0
>>> max_product_subarray([])
0
>>> max_product_subarray("")
0
>>> max_product_subarray(None)
0
>>> max_product_subarray([2, 3, -2, 4.5, -1])
Traceback (most recent call last):
...
ValueError: numbers must be an iterable of integers
>>> max_product_subarray("ABC")
Traceback (most recent call last):
...
ValueError: numbers must be an iterable of integers
"""
if not numbers:
return 0
if not isinstance(numbers, (list, tuple)) or not all(
isinstance(number, int) for number in numbers
):
raise ValueError("numbers must be an iterable of integers")
max_till_now = min_till_now = max_prod = numbers[0]
for i in range(1, len(numbers)):
# update the maximum and minimum subarray products
number = numbers[i]
if number < 0:
max_till_now, min_till_now = min_till_now, max_till_now
max_till_now = max(number, max_till_now * number)
min_till_now = min(number, min_till_now * number)
# update the maximum product found till now
max_prod = max(max_prod, max_till_now)
return max_prod
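The core trick above — keeping both the maximum and minimum running products, and swapping them whenever a negative number is seen — can be shown in a minimal standalone sketch (input validation stripped; the function name is illustrative):

```python
def max_product_sketch(nums: list[int]) -> int:
    # cur_max / cur_min are the largest and smallest products of a
    # subarray ending at the current element. A negative number swaps
    # their roles: it turns the smallest product into the largest.
    best = cur_max = cur_min = nums[0]
    for n in nums[1:]:
        if n < 0:
            cur_max, cur_min = cur_min, cur_max
        cur_max = max(n, cur_max * n)
        cur_min = min(n, cur_min * n)
        best = max(best, cur_max)
    return best


print(max_product_sketch([2, 3, -2, 4, -1]))  # -> 48
```

Here the two trailing negatives cancel, so the best subarray is the whole list: 2*3*(-2)*4*(-1) = 48.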
| def max_product_subarray(numbers: list[int]) -> int:
"""
Returns the maximum product that can be obtained by multiplying a
contiguous subarray of the given integer list `numbers`.
Example:
>>> max_product_subarray([2, 3, -2, 4])
6
>>> max_product_subarray((-2, 0, -1))
0
>>> max_product_subarray([2, 3, -2, 4, -1])
48
>>> max_product_subarray([-1])
-1
>>> max_product_subarray([0])
0
>>> max_product_subarray([])
0
>>> max_product_subarray("")
0
>>> max_product_subarray(None)
0
>>> max_product_subarray([2, 3, -2, 4.5, -1])
Traceback (most recent call last):
...
ValueError: numbers must be an iterable of integers
>>> max_product_subarray("ABC")
Traceback (most recent call last):
...
ValueError: numbers must be an iterable of integers
"""
if not numbers:
return 0
if not isinstance(numbers, (list, tuple)) or not all(
isinstance(number, int) for number in numbers
):
raise ValueError("numbers must be an iterable of integers")
max_till_now = min_till_now = max_prod = numbers[0]
for i in range(1, len(numbers)):
# update the maximum and minimum subarray products
number = numbers[i]
if number < 0:
max_till_now, min_till_now = min_till_now, max_till_now
max_till_now = max(number, max_till_now * number)
min_till_now = min(number, min_till_now * number)
# update the maximum product found till now
max_prod = max(max_prod, max_till_now)
return max_prod
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Title : Finding the value of either Gravitational Force, one of the masses or distance
provided that the other three parameters are given.
Description : Newton's Law of Universal Gravitation explains the presence of force of
attraction between bodies having a definite mass situated at a distance. It is usually
stated as: every particle attracts every other particle in the universe with a
force that is directly proportional to the product of their masses and inversely
proportional to the square of the distance between their centers. The publication of the
theory has become known as the "first great unification", as it marked the unification
of the previously described phenomena of gravity on Earth with known astronomical
behaviors.
The equation for the universal gravitation is as follows:
F = (G * mass_1 * mass_2) / (distance)^2
Source :
- https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation
- Newton (1687) "Philosophiæ Naturalis Principia Mathematica"
"""
from __future__ import annotations
# Define the Gravitational Constant G and the function
GRAVITATIONAL_CONSTANT = 6.6743e-11 # unit of G : m^3 * kg^-1 * s^-2
def gravitational_law(
force: float, mass_1: float, mass_2: float, distance: float
) -> dict[str, float]:
"""
Input Parameters
----------------
force : magnitude in Newtons
mass_1 : mass in Kilograms
mass_2 : mass in Kilograms
distance : distance in Meters
Returns
-------
result : dict name, value pair of the parameter having zero as its value
Returns the value of one of the parameters specified as 0, provided the values of
other parameters are given.
>>> gravitational_law(force=0, mass_1=5, mass_2=10, distance=20)
{'force': 8.342875e-12}
>>> gravitational_law(force=7367.382, mass_1=0, mass_2=74, distance=3048)
{'mass_1': 1.385816317292268e+19}
>>> gravitational_law(force=36337.283, mass_1=0, mass_2=0, distance=35584)
Traceback (most recent call last):
...
ValueError: One and only one argument must be 0
>>> gravitational_law(force=36337.283, mass_1=-674, mass_2=0, distance=35584)
Traceback (most recent call last):
...
ValueError: Mass can not be negative
>>> gravitational_law(force=-847938e12, mass_1=674, mass_2=0, distance=9374)
Traceback (most recent call last):
...
ValueError: Gravitational force can not be negative
"""
product_of_mass = mass_1 * mass_2
if (force, mass_1, mass_2, distance).count(0) != 1:
raise ValueError("One and only one argument must be 0")
if force < 0:
raise ValueError("Gravitational force can not be negative")
if distance < 0:
raise ValueError("Distance can not be negative")
if mass_1 < 0 or mass_2 < 0:
raise ValueError("Mass can not be negative")
if force == 0:
force = GRAVITATIONAL_CONSTANT * product_of_mass / (distance**2)
return {"force": force}
elif mass_1 == 0:
mass_1 = (force) * (distance**2) / (GRAVITATIONAL_CONSTANT * mass_2)
return {"mass_1": mass_1}
elif mass_2 == 0:
mass_2 = (force) * (distance**2) / (GRAVITATIONAL_CONSTANT * mass_1)
return {"mass_2": mass_2}
elif distance == 0:
distance = (GRAVITATIONAL_CONSTANT * product_of_mass / (force)) ** 0.5
return {"distance": distance}
raise ValueError("One and only one argument must be 0")
# Run doctest
if __name__ == "__main__":
import doctest
doctest.testmod()
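As a sanity check on the formula F = G * m1 * m2 / d^2, a minimal standalone computation follows. The values for Earth's mass and mean radius are illustrative constants assumed here, not taken from the code above, and the helper name is hypothetical:

```python
GRAVITATIONAL_CONSTANT = 6.6743e-11  # m^3 * kg^-1 * s^-2


def gravitational_force(mass_1: float, mass_2: float, distance: float) -> float:
    # Direct application of Newton's law of universal gravitation.
    return GRAVITATIONAL_CONSTANT * mass_1 * mass_2 / distance**2


# Earth's mass (~5.972e24 kg) and mean radius (~6.371e6 m) recover the
# familiar surface weight of a 1 kg object, roughly 9.82 N.
force = gravitational_force(5.972e24, 1.0, 6.371e6)
print(f"{force:.2f} N")  # -> 9.82 N
```

The result is close to the usual g = 9.81 m/s^2 times 1 kg, as expected.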
| """
Title : Finding the value of either Gravitational Force, one of the masses or distance
provided that the other three parameters are given.
Description : Newton's Law of Universal Gravitation explains the force of
attraction between bodies having definite masses and separated by a distance. It is
usually stated as follows: every particle attracts every other particle in the
universe with a force that is directly proportional to the product of their masses
and inversely proportional to the square of the distance between their centers. The
publication of the theory has become known as the "first great unification", as it
marked the unification of the previously described phenomena of gravity on Earth
with known astronomical behaviors.
The equation for the universal gravitation is as follows:
F = (G * mass_1 * mass_2) / (distance)^2
Source :
- https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation
- Newton (1687) "Philosophiæ Naturalis Principia Mathematica"
"""
from __future__ import annotations
# Define the Gravitational Constant G and the function
GRAVITATIONAL_CONSTANT = 6.6743e-11 # unit of G : m^3 * kg^-1 * s^-2
def gravitational_law(
force: float, mass_1: float, mass_2: float, distance: float
) -> dict[str, float]:
"""
Input Parameters
----------------
force : magnitude in Newtons
mass_1 : mass in Kilograms
mass_2 : mass in Kilograms
distance : distance in Meters
Returns
-------
result : dict with the name and value of the parameter having zero as its value
Returns the value of the one parameter specified as 0, provided the values of
the other parameters are given.
>>> gravitational_law(force=0, mass_1=5, mass_2=10, distance=20)
{'force': 8.342875e-12}
>>> gravitational_law(force=7367.382, mass_1=0, mass_2=74, distance=3048)
{'mass_1': 1.385816317292268e+19}
>>> gravitational_law(force=36337.283, mass_1=0, mass_2=0, distance=35584)
Traceback (most recent call last):
...
ValueError: One and only one argument must be 0
>>> gravitational_law(force=36337.283, mass_1=-674, mass_2=0, distance=35584)
Traceback (most recent call last):
...
ValueError: Mass can not be negative
>>> gravitational_law(force=-847938e12, mass_1=674, mass_2=0, distance=9374)
Traceback (most recent call last):
...
ValueError: Gravitational force can not be negative
"""
product_of_mass = mass_1 * mass_2
if (force, mass_1, mass_2, distance).count(0) != 1:
raise ValueError("One and only one argument must be 0")
if force < 0:
raise ValueError("Gravitational force can not be negative")
if distance < 0:
raise ValueError("Distance can not be negative")
if mass_1 < 0 or mass_2 < 0:
raise ValueError("Mass can not be negative")
if force == 0:
force = GRAVITATIONAL_CONSTANT * product_of_mass / (distance**2)
return {"force": force}
elif mass_1 == 0:
mass_1 = (force) * (distance**2) / (GRAVITATIONAL_CONSTANT * mass_2)
return {"mass_1": mass_1}
elif mass_2 == 0:
mass_2 = (force) * (distance**2) / (GRAVITATIONAL_CONSTANT * mass_1)
return {"mass_2": mass_2}
elif distance == 0:
distance = (GRAVITATIONAL_CONSTANT * product_of_mass / (force)) ** 0.5
return {"distance": distance}
raise ValueError("One and only one argument must be 0")
# Run doctest
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
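As a quick sanity check of the formula F = (G * mass_1 * mass_2) / distance^2 used in this record, here is a minimal standalone sketch of just the force branch (not the full zero-argument dispatch of `gravitational_law`); the constant mirrors the module's `GRAVITATIONAL_CONSTANT`:

```python
# Minimal sketch: only the force branch of the record's gravitational_law.
GRAVITATIONAL_CONSTANT = 6.6743e-11  # m^3 * kg^-1 * s^-2

def gravitational_force(mass_1: float, mass_2: float, distance: float) -> float:
    # Same evaluation order as the record (product of masses first).
    return GRAVITATIONAL_CONSTANT * (mass_1 * mass_2) / distance**2

print(gravitational_force(5, 10, 20))  # 8.342875e-12, matching the doctest above
```

The same expression rearranges to recover a missing mass or distance, which is what the other branches of the function do.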
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
This is a pure Python implementation of the merge sort algorithm.
For doctests run following command:
python -m doctest -v merge_sort.py
or
python3 -m doctest -v merge_sort.py
For manual testing run:
python merge_sort.py
"""
def merge_sort(collection: list) -> list:
"""
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection sorted in ascending order
Examples:
>>> merge_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_sort([])
[]
>>> merge_sort([-2, -5, -45])
[-45, -5, -2]
"""
def merge(left: list, right: list) -> list:
"""
Merge left and right.
:param left: left collection
:param right: right collection
:return: merge result
"""
def _merge():
while left and right:
yield (left if left[0] <= right[0] else right).pop(0)
yield from left
yield from right
return list(_merge())
if len(collection) <= 1:
return collection
mid = len(collection) // 2
return merge(merge_sort(collection[:mid]), merge_sort(collection[mid:]))
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(*merge_sort(unsorted), sep=",")
| """
This is a pure Python implementation of the merge sort algorithm.
For doctests run following command:
python -m doctest -v merge_sort.py
or
python3 -m doctest -v merge_sort.py
For manual testing run:
python merge_sort.py
"""
def merge_sort(collection: list) -> list:
"""
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection sorted in ascending order
Examples:
>>> merge_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> merge_sort([])
[]
>>> merge_sort([-2, -5, -45])
[-45, -5, -2]
"""
def merge(left: list, right: list) -> list:
"""
Merge left and right.
:param left: left collection
:param right: right collection
:return: merge result
"""
def _merge():
while left and right:
yield (left if left[0] <= right[0] else right).pop(0)
yield from left
yield from right
return list(_merge())
if len(collection) <= 1:
return collection
mid = len(collection) // 2
return merge(merge_sort(collection[:mid]), merge_sort(collection[mid:]))
if __name__ == "__main__":
import doctest
doctest.testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(*merge_sort(unsorted), sep=",")
| -1 |
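The generator-based merge in this record can be condensed into a plain iterative merge; the following is an equivalent sketch of the same algorithm, not the repository's implementation:

```python
def merge(left: list, right: list) -> list:
    # Repeatedly take the smaller head element; pop(0) keeps the sketch short,
    # though index-based iteration would avoid the O(n) pops.
    result = []
    while left and right:
        result.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return result + left + right

def merge_sort(collection: list) -> list:
    # Recursively split in half, then merge the sorted halves.
    if len(collection) <= 1:
        return collection
    mid = len(collection) // 2
    return merge(merge_sort(collection[:mid]), merge_sort(collection[mid:]))

print(merge_sort([0, 5, 3, 2, 2]))  # [0, 2, 2, 3, 5]
```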
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| r"""
Problem: Given root of a binary tree, return the:
1. binary-tree-right-side-view
2. binary-tree-left-side-view
3. binary-tree-top-side-view
4. binary-tree-bottom-side-view
"""
from __future__ import annotations
from collections import defaultdict
from dataclasses import dataclass
@dataclass
class TreeNode:
val: int
left: TreeNode | None = None
right: TreeNode | None = None
def make_tree() -> TreeNode:
"""
>>> make_tree().val
3
"""
return TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
def binary_tree_right_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the right side view of a binary tree.
3 <- 3
/ \
9 20 <- 20
/ \
15 7 <- 7
>>> binary_tree_right_side_view(make_tree())
[3, 20, 7]
>>> binary_tree_right_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, right_view: list[int]
) -> None:
"""
A depth-first search preorder traversal that appends the values seen
from the right side of the tree.
"""
if not root:
return
if depth == len(right_view):
right_view.append(root.val)
depth_first_search(root.right, depth + 1, right_view)
depth_first_search(root.left, depth + 1, right_view)
right_view: list = []
if not root:
return right_view
depth_first_search(root, 0, right_view)
return right_view
def binary_tree_left_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the left side view of a binary tree.
3 -> 3
/ \
9 -> 9 20
/ \
15 -> 15 7
>>> binary_tree_left_side_view(make_tree())
[3, 9, 15]
>>> binary_tree_left_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, left_view: list[int]
) -> None:
"""
A depth-first search preorder traversal that appends the values seen
from the left side of the tree.
"""
if not root:
return
if depth == len(left_view):
left_view.append(root.val)
depth_first_search(root.left, depth + 1, left_view)
depth_first_search(root.right, depth + 1, left_view)
left_view: list = []
if not root:
return left_view
depth_first_search(root, 0, left_view)
return left_view
def binary_tree_top_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the top side view of a binary tree.
9 3 20 7
β¬ β¬ β¬ β¬
3
/ \
9 20
/ \
15 7
>>> binary_tree_top_side_view(make_tree())
[9, 3, 20, 7]
>>> binary_tree_top_side_view(None)
[]
"""
def breadth_first_search(root: TreeNode, top_view: list[int]) -> None:
"""
A breadth-first search traversal using a defaultdict to collect the
values of the tree as seen from the top view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
top_view.append(pair[1][0])
top_view: list = []
if not root:
return top_view
breadth_first_search(root, top_view)
return top_view
def binary_tree_bottom_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the bottom side view of a binary tree.
3
/ \
9 20
/ \
15 7
β β β β
9 15 20 7
>>> binary_tree_bottom_side_view(make_tree())
[9, 15, 20, 7]
>>> binary_tree_bottom_side_view(None)
[]
"""
def breadth_first_search(root: TreeNode, bottom_view: list[int]) -> None:
"""
A breadth-first search traversal using a defaultdict to collect the
values of the tree as seen from the bottom view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
bottom_view.append(pair[1][-1])
bottom_view: list = []
if not root:
return bottom_view
breadth_first_search(root, bottom_view)
return bottom_view
if __name__ == "__main__":
import doctest
doctest.testmod()
| r"""
Problem: Given root of a binary tree, return the:
1. binary-tree-right-side-view
2. binary-tree-left-side-view
3. binary-tree-top-side-view
4. binary-tree-bottom-side-view
"""
from __future__ import annotations
from collections import defaultdict
from dataclasses import dataclass
@dataclass
class TreeNode:
val: int
left: TreeNode | None = None
right: TreeNode | None = None
def make_tree() -> TreeNode:
"""
>>> make_tree().val
3
"""
return TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
def binary_tree_right_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the right side view of a binary tree.
3 <- 3
/ \
9 20 <- 20
/ \
15 7 <- 7
>>> binary_tree_right_side_view(make_tree())
[3, 20, 7]
>>> binary_tree_right_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, right_view: list[int]
) -> None:
"""
A depth-first search preorder traversal that appends the values seen
from the right side of the tree.
"""
if not root:
return
if depth == len(right_view):
right_view.append(root.val)
depth_first_search(root.right, depth + 1, right_view)
depth_first_search(root.left, depth + 1, right_view)
right_view: list = []
if not root:
return right_view
depth_first_search(root, 0, right_view)
return right_view
def binary_tree_left_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the left side view of a binary tree.
3 -> 3
/ \
9 -> 9 20
/ \
15 -> 15 7
>>> binary_tree_left_side_view(make_tree())
[3, 9, 15]
>>> binary_tree_left_side_view(None)
[]
"""
def depth_first_search(
root: TreeNode | None, depth: int, left_view: list[int]
) -> None:
"""
A depth-first search preorder traversal that appends the values seen
from the left side of the tree.
"""
if not root:
return
if depth == len(left_view):
left_view.append(root.val)
depth_first_search(root.left, depth + 1, left_view)
depth_first_search(root.right, depth + 1, left_view)
left_view: list = []
if not root:
return left_view
depth_first_search(root, 0, left_view)
return left_view
def binary_tree_top_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the top side view of a binary tree.
9 3 20 7
β¬ β¬ β¬ β¬
3
/ \
9 20
/ \
15 7
>>> binary_tree_top_side_view(make_tree())
[9, 3, 20, 7]
>>> binary_tree_top_side_view(None)
[]
"""
def breadth_first_search(root: TreeNode, top_view: list[int]) -> None:
"""
A breadth-first search traversal using a defaultdict to collect the
values of the tree as seen from the top view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
top_view.append(pair[1][0])
top_view: list = []
if not root:
return top_view
breadth_first_search(root, top_view)
return top_view
def binary_tree_bottom_side_view(root: TreeNode) -> list[int]:
r"""
Function returns the bottom side view of a binary tree.
3
/ \
9 20
/ \
15 7
β β β β
9 15 20 7
>>> binary_tree_bottom_side_view(make_tree())
[9, 15, 20, 7]
>>> binary_tree_bottom_side_view(None)
[]
"""
def breadth_first_search(root: TreeNode, bottom_view: list[int]) -> None:
"""
A breadth-first search traversal using a defaultdict to collect the
values of the tree as seen from the bottom view
"""
queue = [(root, 0)]
lookup = defaultdict(list)
while queue:
first = queue.pop(0)
node, hd = first
lookup[hd].append(node.val)
if node.left:
queue.append((node.left, hd - 1))
if node.right:
queue.append((node.right, hd + 1))
for pair in sorted(lookup.items(), key=lambda each: each[0]):
bottom_view.append(pair[1][-1])
bottom_view: list = []
if not root:
return bottom_view
breadth_first_search(root, bottom_view)
return bottom_view
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
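The right-side-view recursion in this record (visit the right child first, record the first value reached at each depth) can be exercised with a minimal standalone sketch; `TreeNode` and `right_side_view` are re-declared here for self-containment and mirror the record's code:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class TreeNode:
    val: int
    left: TreeNode | None = None
    right: TreeNode | None = None

def right_side_view(root: TreeNode | None) -> list[int]:
    # The first node reached at each depth is the rightmost one, because
    # the right subtree is explored before the left subtree.
    view: list[int] = []

    def dfs(node: TreeNode | None, depth: int) -> None:
        if node is None:
            return
        if depth == len(view):
            view.append(node.val)
        dfs(node.right, depth + 1)
        dfs(node.left, depth + 1)

    dfs(root, 0)
    return view

tree = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(right_side_view(tree))  # [3, 20, 7]
```

Swapping the order of the two recursive calls yields the left side view, exactly as in the record.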
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
"""
def ceil(x: float) -> int:
"""
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x.
>>> import math
>>> all(ceil(n) == math.ceil(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) <= 0 else int(x) + 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
"""
def ceil(x: float) -> int:
"""
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x.
>>> import math
>>> all(ceil(n) == math.ceil(n) for n
... in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
"""
return int(x) if x - int(x) <= 0 else int(x) + 1
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
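The truncation trick in this record relies on `int()` rounding toward zero, so negative inputs need no special case; a quick standalone check against `math.ceil`:

```python
import math

def ceil(x: float) -> int:
    # int() truncates toward zero, so for negative x, int(x) is already >= x.
    return int(x) if x - int(x) <= 0 else int(x) + 1

# Spot-check against the standard library over the doctest's sample values.
for n in (1, -1, 0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000):
    assert ceil(n) == math.ceil(n)
print(ceil(1.1), ceil(-1.1))  # 2 -1
```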
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
import random
import string
class ShuffledShiftCipher:
"""
This algorithm uses the Caesar Cipher algorithm but removes the option to
use brute force to decrypt the message.
The passcode is a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
Using unique characters from the passcode, the normal list of characters,
that can be allowed in the plaintext, is pivoted and shuffled. Refer to docstring
of __make_key_list() to learn more about the shuffling.
Then, using the passcode, a number is calculated which is used to encrypt the
plaintext message with the normal shift cipher method, only in this case, the
reference, to look back at while decrypting, is shuffled.
Each cipher object can possess an optional argument as passcode, without which a
new passcode is generated for that object automatically.
cip1 = ShuffledShiftCipher('d4usr9TWxw9wMD')
cip2 = ShuffledShiftCipher()
"""
def __init__(self, passcode: str | None = None) -> None:
"""
Initializes a cipher object with a passcode as its entity
Note: no new passcode is generated if the user provides a passcode
while creating the object
"""
self.__passcode = passcode or self.__passcode_creator()
self.__key_list = self.__make_key_list()
self.__shift_key = self.__make_shift_key()
def __str__(self) -> str:
"""
:return: passcode of the cipher object
"""
return "".join(self.__passcode)
def __neg_pos(self, iterlist: list[int]) -> list[int]:
"""
Mutates the list by changing the sign of each alternate element
:param iterlist: takes a list iterable
:return: the mutated list
"""
for i in range(1, len(iterlist), 2):
iterlist[i] *= -1
return iterlist
def __passcode_creator(self) -> list[str]:
"""
Creates a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
:rtype: list
:return: a password of a random length between 10 and 20 characters
"""
choices = string.ascii_letters + string.digits
password = [random.choice(choices) for _ in range(random.randint(10, 20))]
return password
def __make_key_list(self) -> list[str]:
"""
Shuffles the ordered character choices by pivoting at breakpoints
Breakpoints are the set of characters in the passcode
eg:
if, ABCDEFGHIJKLMNOPQRSTUVWXYZ are the possible characters
and CAMERA is the passcode
then, breakpoints = [A,C,E,M,R] # sorted set of characters from passcode
shuffled parts: [A,CB,ED,MLKJIHGF,RQPON,ZYXWVUTS]
shuffled __key_list : ACBEDMLKJIHGFRQPONZYXWVUTS
Shuffling only the 26 letters of the English alphabet can generate 26!
combinations for the shuffled list. In this program we consider a set of
97 characters (including letters, digits, punctuation and whitespaces),
thereby creating a possibility of 97! combinations (which is a 152-digit number
in itself), thus diminishing the possibility of a brute force approach.
Moreover, shift keys even introduce a multiple of 26 for a brute force approach
for each of the already 97! combinations.
"""
# key_list_options contains nearly all printable characters except a few
# elements from string.whitespace
key_list_options = (
string.ascii_letters + string.digits + string.punctuation + " \t\n"
)
keys_l = []
# creates points known as breakpoints to break the key_list_options at those
# points and pivot each substring
breakpoints = sorted(set(self.__passcode))
temp_list: list[str] = []
# algorithm for creating a new shuffled list, keys_l, out of key_list_options
for i in key_list_options:
temp_list.extend(i)
# checking breakpoints at which to pivot temporary sublist and add it into
# keys_l
if i in breakpoints or i == key_list_options[-1]:
keys_l.extend(temp_list[::-1])
temp_list.clear()
# returning a shuffled keys_l to prevent brute force guessing of shift key
return keys_l
def __make_shift_key(self) -> int:
"""
sum() of the mutated list of ascii values of all characters where the
mutated list is the one returned by __neg_pos()
"""
num = sum(self.__neg_pos([ord(x) for x in self.__passcode]))
return num if num > 0 else len(self.__passcode)
def decrypt(self, encoded_message: str) -> str:
"""
Performs shifting of the encoded_message w.r.t. the shuffled __key_list
to create the decoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.decrypt("d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#")
'Hello, this is a modified Caesar cipher'
"""
decoded_message = ""
# decoding shift like Caesar cipher algorithm implementing negative shift or
# reverse shift or left shift
for i in encoded_message:
position = self.__key_list.index(i)
decoded_message += self.__key_list[
(position - self.__shift_key) % -len(self.__key_list)
]
return decoded_message
def encrypt(self, plaintext: str) -> str:
"""
Performs shifting of the plaintext w.r.t. the shuffled __key_list
to create the encoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.encrypt('Hello, this is a modified Caesar cipher')
"d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#"
"""
encoded_message = ""
# encoding shift like Caesar cipher algorithm implementing positive shift or
# forward shift or right shift
for i in plaintext:
position = self.__key_list.index(i)
encoded_message += self.__key_list[
(position + self.__shift_key) % len(self.__key_list)
]
return encoded_message
def test_end_to_end(msg: str = "Hello, this is a modified Caesar cipher") -> str:
"""
>>> test_end_to_end()
'Hello, this is a modified Caesar cipher'
"""
cip1 = ShuffledShiftCipher()
return cip1.decrypt(cip1.encrypt(msg))
if __name__ == "__main__":
import doctest
doctest.testmod()
| from __future__ import annotations
import random
import string
class ShuffledShiftCipher:
"""
This algorithm uses the Caesar Cipher algorithm but removes the option to
use brute force to decrypt the message.
The passcode is a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
Using unique characters from the passcode, the normal list of characters,
that can be allowed in the plaintext, is pivoted and shuffled. Refer to docstring
of __make_key_list() to learn more about the shuffling.
Then, using the passcode, a number is calculated which is used to encrypt the
plaintext message with the normal shift cipher method, only in this case, the
reference, to look back at while decrypting, is shuffled.
Each cipher object can possess an optional argument as passcode, without which a
new passcode is generated for that object automatically.
cip1 = ShuffledShiftCipher('d4usr9TWxw9wMD')
cip2 = ShuffledShiftCipher()
"""
def __init__(self, passcode: str | None = None) -> None:
"""
Initializes a cipher object with a passcode as its entity
Note: No new passcode is generated if user provides a passcode
while creating the object
"""
self.__passcode = passcode or self.__passcode_creator()
self.__key_list = self.__make_key_list()
self.__shift_key = self.__make_shift_key()
def __str__(self) -> str:
"""
:return: passcode of the cipher object
"""
return "".join(self.__passcode)
def __neg_pos(self, iterlist: list[int]) -> list[int]:
"""
Mutates the list by changing the sign of each alternate element
:param iterlist: takes a list iterable
:return: the mutated list
"""
for i in range(1, len(iterlist), 2):
iterlist[i] *= -1
return iterlist
def __passcode_creator(self) -> list[str]:
"""
Creates a random password from the selection buffer of
1. uppercase letters of the English alphabet
2. lowercase letters of the English alphabet
3. digits from 0 to 9
:rtype: list
:return: a password of a random length between 10 to 20
"""
choices = string.ascii_letters + string.digits
password = [random.choice(choices) for _ in range(random.randint(10, 20))]
return password
def __make_key_list(self) -> list[str]:
"""
Shuffles the ordered character choices by pivoting at breakpoints
Breakpoints are the set of characters in the passcode
eg:
if, ABCDEFGHIJKLMNOPQRSTUVWXYZ are the possible characters
and CAMERA is the passcode
then, breakpoints = [A,C,E,M,R] # sorted set of characters from passcode
shuffled parts: [A,CB,ED,MLKJIHGF,RQPON,ZYXWVUTS]
shuffled __key_list : ACBEDMLKJIHGFRQPONZYXWVUTS
Shuffling only the 26 letters of the English alphabet can generate 26!
combinations for the shuffled list. In this program we consider a set of
97 characters (letters, digits, punctuation and whitespace), creating a
possibility of 97! combinations (a 152-digit number in itself), thus
diminishing the feasibility of a brute-force attack. Moreover, the shift
key further multiplies the search space for each of those 97! combinations.
"""
# key_list_options contains nearly all printable characters except a few
# elements of string.whitespace
key_list_options = (
string.ascii_letters + string.digits + string.punctuation + " \t\n"
)
keys_l = []
# creates points known as breakpoints to break the key_list_options at those
# points and pivot each substring
breakpoints = sorted(set(self.__passcode))
temp_list: list[str] = []
# algorithm for creating a new shuffled list, keys_l, out of key_list_options
for i in key_list_options:
temp_list.extend(i)
# checking breakpoints at which to pivot temporary sublist and add it into
# keys_l
if i in breakpoints or i == key_list_options[-1]:
keys_l.extend(temp_list[::-1])
temp_list.clear()
# returning a shuffled keys_l to prevent brute force guessing of shift key
return keys_l
def __make_shift_key(self) -> int:
"""
sum() of the mutated list of ascii values of all characters where the
mutated list is the one returned by __neg_pos()
"""
num = sum(self.__neg_pos([ord(x) for x in self.__passcode]))
return num if num > 0 else len(self.__passcode)
def decrypt(self, encoded_message: str) -> str:
"""
Performs shifting of the encoded_message w.r.t. the shuffled __key_list
to create the decoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.decrypt("d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#")
'Hello, this is a modified Caesar cipher'
"""
decoded_message = ""
# decoding shift like Caesar cipher algorithm implementing negative shift or
# reverse shift or left shift
for i in encoded_message:
position = self.__key_list.index(i)
decoded_message += self.__key_list[
(position - self.__shift_key) % -len(self.__key_list)
]
return decoded_message
def encrypt(self, plaintext: str) -> str:
"""
Performs shifting of the plaintext w.r.t. the shuffled __key_list
to create the encoded_message
>>> ssc = ShuffledShiftCipher('4PYIXyqeQZr44')
>>> ssc.encrypt('Hello, this is a modified Caesar cipher')
"d>**-1z6&'5z'5z:z+-='$'>=zp:>5:#z<'.&>#"
"""
encoded_message = ""
# encoding shift like Caesar cipher algorithm implementing positive shift or
# forward shift or right shift
for i in plaintext:
position = self.__key_list.index(i)
encoded_message += self.__key_list[
(position + self.__shift_key) % len(self.__key_list)
]
return encoded_message
def test_end_to_end(msg: str = "Hello, this is a modified Caesar cipher") -> str:
"""
>>> test_end_to_end()
'Hello, this is a modified Caesar cipher'
"""
cip1 = ShuffledShiftCipher()
return cip1.decrypt(cip1.encrypt(msg))
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
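The breakpoint-pivot shuffle that `__make_key_list()` documents can be seen in isolation with a small standalone sketch. The function name `make_key_list` below is illustrative only (not part of the cipher class): it cuts the ordered character options at every character of the passcode and reverses each segment, reproducing the `CAMERA` example from the docstring.

```python
def make_key_list(options: str, passcode: str) -> list[str]:
    # sorted set of passcode characters acts as the pivot points
    breakpoints = sorted(set(passcode))
    keys: list[str] = []
    segment: list[str] = []
    for ch in options:
        segment.append(ch)
        # reverse (pivot) the accumulated segment at each breakpoint,
        # and once more at the very end of the options string
        if ch in breakpoints or ch == options[-1]:
            keys.extend(segment[::-1])
            segment.clear()
    return keys

print("".join(make_key_list("ABCDEFGHIJKLMNOPQRSTUVWXYZ", "CAMERA")))
# ACBEDMLKJIHGFRQPONZYXWVUTS
```

The output matches the `shuffled __key_list` shown in the docstring: each segment between breakpoints (`A`, `CB`, `ED`, `MLKJIHGF`, `RQPON`, `ZYXWVUTS`) is reversed in place, so the result is a permutation of the original options.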
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
- A linked list is similar to an array in that it holds values; however, links
in a linked list do not have indexes.
- This is an example of a double-ended, doubly linked list.
- Each link references the next link and the previous one.
- A Doubly Linked List (DLL) contains an extra pointer, typically called the
previous pointer, together with the next pointer and data found in a singly
linked list.
- Advantages over an SLL: it can be traversed in both forward and backward
directions, and the delete operation is more efficient.
"""
class Node:
def __init__(self, data: int, previous=None, next_node=None):
self.data = data
self.previous = previous
self.next = next_node
def __str__(self) -> str:
return f"{self.data}"
def get_data(self) -> int:
return self.data
def get_next(self):
return self.next
def get_previous(self):
return self.previous
class LinkedListIterator:
def __init__(self, head):
self.current = head
def __iter__(self):
return self
def __next__(self):
if not self.current:
raise StopIteration
else:
value = self.current.get_data()
self.current = self.current.get_next()
return value
class LinkedList:
def __init__(self):
self.head = None # First node in list
self.tail = None # Last node in list
def __str__(self):
current = self.head
nodes = []
while current is not None:
nodes.append(current.get_data())
current = current.get_next()
return " ".join(str(node) for node in nodes)
def __contains__(self, value: int):
current = self.head
while current:
if current.get_data() == value:
return True
current = current.get_next()
return False
def __iter__(self):
return LinkedListIterator(self.head)
def get_head_data(self):
if self.head:
return self.head.get_data()
return None
def get_tail_data(self):
if self.tail:
return self.tail.get_data()
return None
def set_head(self, node: Node) -> None:
if self.head is None:
self.head = node
self.tail = node
else:
self.insert_before_node(self.head, node)
def set_tail(self, node: Node) -> None:
if self.head is None:
self.set_head(node)
else:
self.insert_after_node(self.tail, node)
def insert(self, value: int) -> None:
node = Node(value)
if self.head is None:
self.set_head(node)
else:
self.set_tail(node)
def insert_before_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.next = node
node_to_insert.previous = node.previous
if node.get_previous() is None:
self.head = node_to_insert
else:
node.previous.next = node_to_insert
node.previous = node_to_insert
def insert_after_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.previous = node
node_to_insert.next = node.next
if node.get_next() is None:
self.tail = node_to_insert
else:
node.next.previous = node_to_insert
node.next = node_to_insert
def insert_at_position(self, position: int, value: int) -> None:
current_position = 1
new_node = Node(value)
node = self.head
while node:
if current_position == position:
self.insert_before_node(node, new_node)
return
current_position += 1
node = node.next
self.insert_after_node(self.tail, new_node)
def get_node(self, item: int) -> Node:
node = self.head
while node:
if node.get_data() == item:
return node
node = node.get_next()
raise Exception("Node not found")
def delete_value(self, value):
if (node := self.get_node(value)) is not None:
if node == self.head:
self.head = self.head.get_next()
if node == self.tail:
self.tail = self.tail.get_previous()
self.remove_node_pointers(node)
@staticmethod
def remove_node_pointers(node: Node) -> None:
if node.get_next():
node.next.previous = node.previous
if node.get_previous():
node.previous.next = node.next
node.next = None
node.previous = None
def is_empty(self):
return self.head is None
def create_linked_list() -> None:
"""
>>> new_linked_list = LinkedList()
>>> new_linked_list.get_head_data() is None
True
>>> new_linked_list.get_tail_data() is None
True
>>> new_linked_list.is_empty()
True
>>> new_linked_list.insert(10)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
10
>>> new_linked_list.insert_at_position(position=3, value=20)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_head(Node(1000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_tail(Node(2000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
2000
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> new_linked_list.is_empty()
False
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> 10 in new_linked_list
True
>>> new_linked_list.delete_value(value=10)
>>> 10 in new_linked_list
False
>>> new_linked_list.delete_value(value=2000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.delete_value(value=1000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.get_head_data()
20
>>> for value in new_linked_list:
... print(value)
20
>>> new_linked_list.delete_value(value=20)
>>> for value in new_linked_list:
... print(value)
>>> for value in range(1,10):
... new_linked_list.insert(value=value)
>>> for value in new_linked_list:
... print(value)
1
2
3
4
5
6
7
8
9
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
- A linked list is similar to an array in that it holds values; however, links
in a linked list do not have indexes.
- This is an example of a double-ended, doubly linked list.
- Each link references the next link and the previous one.
- A Doubly Linked List (DLL) contains an extra pointer, typically called the
previous pointer, together with the next pointer and data found in a singly
linked list.
- Advantages over an SLL: it can be traversed in both forward and backward
directions, and the delete operation is more efficient.
"""
class Node:
def __init__(self, data: int, previous=None, next_node=None):
self.data = data
self.previous = previous
self.next = next_node
def __str__(self) -> str:
return f"{self.data}"
def get_data(self) -> int:
return self.data
def get_next(self):
return self.next
def get_previous(self):
return self.previous
class LinkedListIterator:
def __init__(self, head):
self.current = head
def __iter__(self):
return self
def __next__(self):
if not self.current:
raise StopIteration
else:
value = self.current.get_data()
self.current = self.current.get_next()
return value
class LinkedList:
def __init__(self):
self.head = None # First node in list
self.tail = None # Last node in list
def __str__(self):
current = self.head
nodes = []
while current is not None:
nodes.append(current.get_data())
current = current.get_next()
return " ".join(str(node) for node in nodes)
def __contains__(self, value: int):
current = self.head
while current:
if current.get_data() == value:
return True
current = current.get_next()
return False
def __iter__(self):
return LinkedListIterator(self.head)
def get_head_data(self):
if self.head:
return self.head.get_data()
return None
def get_tail_data(self):
if self.tail:
return self.tail.get_data()
return None
def set_head(self, node: Node) -> None:
if self.head is None:
self.head = node
self.tail = node
else:
self.insert_before_node(self.head, node)
def set_tail(self, node: Node) -> None:
if self.head is None:
self.set_head(node)
else:
self.insert_after_node(self.tail, node)
def insert(self, value: int) -> None:
node = Node(value)
if self.head is None:
self.set_head(node)
else:
self.set_tail(node)
def insert_before_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.next = node
node_to_insert.previous = node.previous
if node.get_previous() is None:
self.head = node_to_insert
else:
node.previous.next = node_to_insert
node.previous = node_to_insert
def insert_after_node(self, node: Node, node_to_insert: Node) -> None:
node_to_insert.previous = node
node_to_insert.next = node.next
if node.get_next() is None:
self.tail = node_to_insert
else:
node.next.previous = node_to_insert
node.next = node_to_insert
def insert_at_position(self, position: int, value: int) -> None:
current_position = 1
new_node = Node(value)
node = self.head
while node:
if current_position == position:
self.insert_before_node(node, new_node)
return
current_position += 1
node = node.next
self.insert_after_node(self.tail, new_node)
def get_node(self, item: int) -> Node:
node = self.head
while node:
if node.get_data() == item:
return node
node = node.get_next()
raise Exception("Node not found")
def delete_value(self, value):
if (node := self.get_node(value)) is not None:
if node == self.head:
self.head = self.head.get_next()
if node == self.tail:
self.tail = self.tail.get_previous()
self.remove_node_pointers(node)
@staticmethod
def remove_node_pointers(node: Node) -> None:
if node.get_next():
node.next.previous = node.previous
if node.get_previous():
node.previous.next = node.next
node.next = None
node.previous = None
def is_empty(self):
return self.head is None
def create_linked_list() -> None:
"""
>>> new_linked_list = LinkedList()
>>> new_linked_list.get_head_data() is None
True
>>> new_linked_list.get_tail_data() is None
True
>>> new_linked_list.is_empty()
True
>>> new_linked_list.insert(10)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
10
>>> new_linked_list.insert_at_position(position=3, value=20)
>>> new_linked_list.get_head_data()
10
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_head(Node(1000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.set_tail(Node(2000))
>>> new_linked_list.get_head_data()
1000
>>> new_linked_list.get_tail_data()
2000
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> new_linked_list.is_empty()
False
>>> for value in new_linked_list:
... print(value)
1000
10
20
2000
>>> 10 in new_linked_list
True
>>> new_linked_list.delete_value(value=10)
>>> 10 in new_linked_list
False
>>> new_linked_list.delete_value(value=2000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.delete_value(value=1000)
>>> new_linked_list.get_tail_data()
20
>>> new_linked_list.get_head_data()
20
>>> for value in new_linked_list:
... print(value)
20
>>> new_linked_list.delete_value(value=20)
>>> for value in new_linked_list:
... print(value)
>>> for value in range(1,10):
... new_linked_list.insert(value=value)
>>> for value in new_linked_list:
... print(value)
1
2
3
4
5
6
7
8
9
"""
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
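The four-pointer splice performed by `insert_after_node()` in the listing can be demonstrated with a minimal self-contained sketch. The stripped-down `Node` and `insert_after` below are stand-ins for illustration, not the classes from the listing:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.previous = None
        self.next = None

def insert_after(node, new_node):
    # splice new_node between node and node.next, fixing all four links
    new_node.previous = node
    new_node.next = node.next
    if node.next is not None:
        node.next.previous = new_node
    node.next = new_node

a, c = Node(1), Node(3)
insert_after(a, c)        # list: 1 <-> 3
insert_after(a, Node(2))  # list: 1 <-> 2 <-> 3

out, cur = [], a
while cur:
    out.append(cur.data)
    cur = cur.next
print(out)  # [1, 2, 3]
```

Because both `previous` and `next` are rewired on each splice, the same list can be walked backward from the tail, which is the advantage over a singly linked list that the module docstring mentions.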
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| def is_int_palindrome(num: int) -> bool:
"""
Returns whether `num` is a palindrome or not
(see for reference https://en.wikipedia.org/wiki/Palindromic_number).
>>> is_int_palindrome(-121)
False
>>> is_int_palindrome(0)
True
>>> is_int_palindrome(10)
False
>>> is_int_palindrome(11)
True
>>> is_int_palindrome(101)
True
>>> is_int_palindrome(120)
False
"""
if num < 0:
return False
num_copy: int = num
rev_num: int = 0
while num > 0:
rev_num = rev_num * 10 + (num % 10)
num //= 10
return num_copy == rev_num
if __name__ == "__main__":
import doctest
doctest.testmod()
| def is_int_palindrome(num: int) -> bool:
"""
Returns whether `num` is a palindrome or not
(see for reference https://en.wikipedia.org/wiki/Palindromic_number).
>>> is_int_palindrome(-121)
False
>>> is_int_palindrome(0)
True
>>> is_int_palindrome(10)
False
>>> is_int_palindrome(11)
True
>>> is_int_palindrome(101)
True
>>> is_int_palindrome(120)
False
"""
if num < 0:
return False
num_copy: int = num
rev_num: int = 0
while num > 0:
rev_num = rev_num * 10 + (num % 10)
num //= 10
return num_copy == rev_num
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
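The digit-reversal loop inside `is_int_palindrome()` can be traced on its own. The helper name `reverse_digits` below is illustrative: it builds `rev_num` exactly as the loop in the listing does, by repeatedly appending the last digit of `num`.

```python
def reverse_digits(num: int) -> int:
    rev = 0
    while num > 0:
        rev = rev * 10 + num % 10  # shift rev left, append last digit of num
        num //= 10
    return rev

print(reverse_digits(121))    # 121  -> palindrome, equals the original
print(reverse_digits(120))    # 21   -> trailing zero is dropped, not equal
print(reverse_digits(12345))  # 54321
```

Note how `120` reverses to `21`, not `021`: the leading zero vanishes, which is why comparing `num_copy == rev_num` correctly rejects numbers ending in zero.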
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import math
from timeit import timeit
def num_digits(n: int) -> int:
"""
Find the number of digits in a number.
>>> num_digits(12345)
5
>>> num_digits(123)
3
>>> num_digits(0)
1
>>> num_digits(-1)
1
>>> num_digits(-123456)
6
"""
digits = 0
n = abs(n)
while True:
n = n // 10
digits += 1
if n == 0:
break
return digits
def num_digits_fast(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used because the logarithm of a negative number is not defined.
>>> num_digits_fast(12345)
5
>>> num_digits_fast(123)
3
>>> num_digits_fast(0)
1
>>> num_digits_fast(-1)
1
>>> num_digits_fast(-123456)
6
"""
return 1 if n == 0 else math.floor(math.log(abs(n), 10) + 1)
def num_digits_faster(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used for negative numbers
>>> num_digits_faster(12345)
5
>>> num_digits_faster(123)
3
>>> num_digits_faster(0)
1
>>> num_digits_faster(-1)
1
>>> num_digits_faster(-123456)
6
"""
return len(str(abs(n)))
def benchmark() -> None:
"""
Benchmark multiple functions, with three different length int values.
"""
from collections.abc import Callable
def benchmark_a_function(func: Callable, value: int) -> None:
call = f"{func.__name__}({value})"
timing = timeit(f"__main__.{call}", setup="import __main__")
print(f"{call}: {func(value)} -- {timing} seconds")
for value in (262144, 1125899906842624, 1267650600228229401496703205376):
for func in (num_digits, num_digits_fast, num_digits_faster):
benchmark_a_function(func, value)
print()
if __name__ == "__main__":
import doctest
doctest.testmod()
benchmark()
| import math
from timeit import timeit
def num_digits(n: int) -> int:
"""
Find the number of digits in a number.
>>> num_digits(12345)
5
>>> num_digits(123)
3
>>> num_digits(0)
1
>>> num_digits(-1)
1
>>> num_digits(-123456)
6
"""
digits = 0
n = abs(n)
while True:
n = n // 10
digits += 1
if n == 0:
break
return digits
def num_digits_fast(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used because the logarithm of a negative number is not defined.
>>> num_digits_fast(12345)
5
>>> num_digits_fast(123)
3
>>> num_digits_fast(0)
1
>>> num_digits_fast(-1)
1
>>> num_digits_fast(-123456)
6
"""
return 1 if n == 0 else math.floor(math.log(abs(n), 10) + 1)
def num_digits_faster(n: int) -> int:
"""
Find the number of digits in a number.
abs() is used for negative numbers
>>> num_digits_faster(12345)
5
>>> num_digits_faster(123)
3
>>> num_digits_faster(0)
1
>>> num_digits_faster(-1)
1
>>> num_digits_faster(-123456)
6
"""
return len(str(abs(n)))
def benchmark() -> None:
"""
Benchmark multiple functions, with three different length int values.
"""
from collections.abc import Callable
def benchmark_a_function(func: Callable, value: int) -> None:
call = f"{func.__name__}({value})"
timing = timeit(f"__main__.{call}", setup="import __main__")
print(f"{call}: {func(value)} -- {timing} seconds")
for value in (262144, 1125899906842624, 1267650600228229401496703205376):
for func in (num_digits, num_digits_fast, num_digits_faster):
benchmark_a_function(func, value)
print()
if __name__ == "__main__":
import doctest
doctest.testmod()
benchmark()
| -1 |
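The three digit-counting approaches in the row above can be exercised side by side. A minimal standalone sketch (assuming only the stdlib `math` module, not part of the PR data) that checks they agree:

```python
import math

def num_digits(n: int) -> int:
    # Iterative approach: repeatedly divide by 10 and count the steps.
    digits = 0
    n = abs(n)
    while True:
        n //= 10
        digits += 1
        if n == 0:
            break
    return digits

def num_digits_fast(n: int) -> int:
    # Logarithmic approach: abs() because the log of a negative is undefined.
    return 1 if n == 0 else math.floor(math.log(abs(n), 10) + 1)

def num_digits_faster(n: int) -> int:
    # String approach: length of the decimal representation.
    return len(str(abs(n)))

for n in (0, -1, 12345, 2**100):
    assert num_digits(n) == num_digits_fast(n) == num_digits_faster(n)
print(num_digits(2**100))  # 31
```

Note that the logarithmic variant can misreport for some exact powers of ten due to floating-point rounding in `math.log`, which is one reason the string-length variant is often preferred for correctness.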
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Greatest Common Divisor.
Wikipedia reference: https://en.wikipedia.org/wiki/Greatest_common_divisor
gcd(a, b) = gcd(a, -b) = gcd(-a, b) = gcd(-a, -b) by definition of divisibility
"""
def greatest_common_divisor(a: int, b: int) -> int:
"""
Calculate Greatest Common Divisor (GCD).
>>> greatest_common_divisor(24, 40)
8
>>> greatest_common_divisor(1, 1)
1
>>> greatest_common_divisor(1, 800)
1
>>> greatest_common_divisor(11, 37)
1
>>> greatest_common_divisor(3, 5)
1
>>> greatest_common_divisor(16, 4)
4
>>> greatest_common_divisor(-3, 9)
3
>>> greatest_common_divisor(9, -3)
3
>>> greatest_common_divisor(3, -9)
3
>>> greatest_common_divisor(-3, -9)
3
"""
return abs(b) if a == 0 else greatest_common_divisor(b % a, a)
def gcd_by_iterative(x: int, y: int) -> int:
"""
The method below is more memory efficient because it does not create additional
stack frames for recursive function calls (as done in the method above).
>>> gcd_by_iterative(24, 40)
8
>>> greatest_common_divisor(24, 40) == gcd_by_iterative(24, 40)
True
>>> gcd_by_iterative(-3, -9)
3
>>> gcd_by_iterative(3, -9)
3
>>> gcd_by_iterative(1, -800)
1
>>> gcd_by_iterative(11, 37)
1
"""
while y: # --> when y=0 then loop will terminate and return x as final GCD.
x, y = y, x % y
return abs(x)
def main():
"""
Call Greatest Common Divisor function.
"""
try:
nums = input("Enter two integers separated by comma (,): ").split(",")
num_1 = int(nums[0])
num_2 = int(nums[1])
print(
f"greatest_common_divisor({num_1}, {num_2}) = "
f"{greatest_common_divisor(num_1, num_2)}"
)
print(f"By iterative gcd({num_1}, {num_2}) = {gcd_by_iterative(num_1, num_2)}")
except (IndexError, UnboundLocalError, ValueError):
print("Wrong input")
if __name__ == "__main__":
main()
| """
Greatest Common Divisor.
Wikipedia reference: https://en.wikipedia.org/wiki/Greatest_common_divisor
gcd(a, b) = gcd(a, -b) = gcd(-a, b) = gcd(-a, -b) by definition of divisibility
"""
def greatest_common_divisor(a: int, b: int) -> int:
"""
Calculate Greatest Common Divisor (GCD).
>>> greatest_common_divisor(24, 40)
8
>>> greatest_common_divisor(1, 1)
1
>>> greatest_common_divisor(1, 800)
1
>>> greatest_common_divisor(11, 37)
1
>>> greatest_common_divisor(3, 5)
1
>>> greatest_common_divisor(16, 4)
4
>>> greatest_common_divisor(-3, 9)
3
>>> greatest_common_divisor(9, -3)
3
>>> greatest_common_divisor(3, -9)
3
>>> greatest_common_divisor(-3, -9)
3
"""
return abs(b) if a == 0 else greatest_common_divisor(b % a, a)
def gcd_by_iterative(x: int, y: int) -> int:
"""
The method below is more memory efficient because it does not create additional
stack frames for recursive function calls (as done in the method above).
>>> gcd_by_iterative(24, 40)
8
>>> greatest_common_divisor(24, 40) == gcd_by_iterative(24, 40)
True
>>> gcd_by_iterative(-3, -9)
3
>>> gcd_by_iterative(3, -9)
3
>>> gcd_by_iterative(1, -800)
1
>>> gcd_by_iterative(11, 37)
1
"""
while y: # --> when y=0 then loop will terminate and return x as final GCD.
x, y = y, x % y
return abs(x)
def main():
"""
Call Greatest Common Divisor function.
"""
try:
nums = input("Enter two integers separated by comma (,): ").split(",")
num_1 = int(nums[0])
num_2 = int(nums[1])
print(
f"greatest_common_divisor({num_1}, {num_2}) = "
f"{greatest_common_divisor(num_1, num_2)}"
)
print(f"By iterative gcd({num_1}, {num_2}) = {gcd_by_iterative(num_1, num_2)}")
except (IndexError, UnboundLocalError, ValueError):
print("Wrong input")
if __name__ == "__main__":
main()
| -1 |
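A quick usage sketch for the iterative GCD from the row above, e.g. reducing a fraction to lowest terms (a standalone illustration, not part of the PR data):

```python
def gcd_by_iterative(x: int, y: int) -> int:
    # Euclidean algorithm: when y reaches 0, x holds the GCD.
    while y:
        x, y = y, x % y
    return abs(x)

# Reduce 24/40 to lowest terms by dividing out the GCD.
num, den = 24, 40
g = gcd_by_iterative(num, den)
print(f"{num}/{den} -> {num // g}/{den // g}")  # 24/40 -> 3/5
```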
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # https://en.wikipedia.org/wiki/Simulated_annealing
import math
import random
from typing import Any
from .hill_climbing import SearchProblem
def simulated_annealing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
start_temperate: float = 100,
rate_of_decrease: float = 0.01,
threshold_temp: float = 1,
) -> Any:
"""
Implementation of the simulated annealing algorithm. We start with a given state
and find all its neighbors. We pick a random neighbor; if that neighbor improves
the solution, we move in that direction. If it does not, we generate a random real
number between 0 and 1, and if that number falls within a certain range (calculated
from the temperature) we still move in that direction; otherwise we pick another
neighbor randomly and repeat the process.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the maximum, else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
start_temperate: the initial temperature of the system when the program starts.
rate_of_decrease: the rate at which the temperature decreases in each iteration.
threshold_temp: the threshold temperature below which we end the search
Returns a search state having the maximum (or minimum) score.
"""
search_end = False
current_state = search_prob
current_temp = start_temperate
scores = []
iterations = 0
best_state = None
while not search_end:
current_score = current_state.score()
if best_state is None or current_score > best_state.score():
best_state = current_state
scores.append(current_score)
iterations += 1
next_state = None
neighbors = current_state.get_neighbors()
while (
next_state is None and neighbors
): # until we find a neighbor we can move to (or run out of neighbors)
index = random.randint(0, len(neighbors) - 1) # picking a random neighbor
picked_neighbor = neighbors.pop(index)
change = picked_neighbor.score() - current_score
if (
picked_neighbor.x > max_x
or picked_neighbor.x < min_x
or picked_neighbor.y > max_y
or picked_neighbor.y < min_y
):
continue # neighbor outside our bounds
if not find_max:
change = change * -1 # in case we are finding minimum
if change > 0: # improves the solution
next_state = picked_neighbor
else:
probability = (math.e) ** (
change / current_temp
) # probability generation function
if random.random() < probability: # random number within probability
next_state = picked_neighbor
current_temp = current_temp - (current_temp * rate_of_decrease)
if current_temp < threshold_temp or next_state is None:
# temperature below threshold, or could not find a suitable neighbor
search_end = True
else:
current_state = next_state
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return best_state
if __name__ == "__main__":
def test_f1(x, y):
return (x**2) + (y**2)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via simulated annealing: {local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=True, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The maximum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via simulated annealing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x**2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f2)
local_min = simulated_annealing(prob, find_max=False, visualization=True)
print(
"The minimum score for f(x, y) = 3*x^2 - 6*y found via simulated annealing: "
f"{local_min.score()}"
)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f2)
local_min = simulated_annealing(prob, find_max=True, visualization=True)
print(
"The maximum score for f(x, y) = 3*x^2 - 6*y found via simulated annealing: "
f"{local_min.score()}"
)
| # https://en.wikipedia.org/wiki/Simulated_annealing
import math
import random
from typing import Any
from .hill_climbing import SearchProblem
def simulated_annealing(
search_prob,
find_max: bool = True,
max_x: float = math.inf,
min_x: float = -math.inf,
max_y: float = math.inf,
min_y: float = -math.inf,
visualization: bool = False,
start_temperate: float = 100,
rate_of_decrease: float = 0.01,
threshold_temp: float = 1,
) -> Any:
"""
Implementation of the simulated annealing algorithm. We start with a given state
and find all its neighbors. We pick a random neighbor; if that neighbor improves
the solution, we move in that direction. If it does not, we generate a random real
number between 0 and 1, and if that number falls within a certain range (calculated
from the temperature) we still move in that direction; otherwise we pick another
neighbor randomly and repeat the process.
Args:
search_prob: The search state at the start.
find_max: If True, the algorithm should find the maximum, else the minimum.
max_x, min_x, max_y, min_y: the maximum and minimum bounds of x and y.
visualization: If True, a matplotlib graph is displayed.
start_temperate: the initial temperature of the system when the program starts.
rate_of_decrease: the rate at which the temperature decreases in each iteration.
threshold_temp: the threshold temperature below which we end the search
Returns a search state having the maximum (or minimum) score.
"""
search_end = False
current_state = search_prob
current_temp = start_temperate
scores = []
iterations = 0
best_state = None
while not search_end:
current_score = current_state.score()
if best_state is None or current_score > best_state.score():
best_state = current_state
scores.append(current_score)
iterations += 1
next_state = None
neighbors = current_state.get_neighbors()
while (
next_state is None and neighbors
): # until we find a neighbor we can move to (or run out of neighbors)
index = random.randint(0, len(neighbors) - 1) # picking a random neighbor
picked_neighbor = neighbors.pop(index)
change = picked_neighbor.score() - current_score
if (
picked_neighbor.x > max_x
or picked_neighbor.x < min_x
or picked_neighbor.y > max_y
or picked_neighbor.y < min_y
):
continue # neighbor outside our bounds
if not find_max:
change = change * -1 # in case we are finding minimum
if change > 0: # improves the solution
next_state = picked_neighbor
else:
probability = (math.e) ** (
change / current_temp
) # probability generation function
if random.random() < probability: # random number within probability
next_state = picked_neighbor
current_temp = current_temp - (current_temp * rate_of_decrease)
if current_temp < threshold_temp or next_state is None:
# temperature below threshold, or could not find a suitable neighbor
search_end = True
else:
current_state = next_state
if visualization:
from matplotlib import pyplot as plt
plt.plot(range(iterations), scores)
plt.xlabel("Iterations")
plt.ylabel("Function values")
plt.show()
return best_state
if __name__ == "__main__":
def test_f1(x, y):
return (x**2) + (y**2)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=False, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The minimum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via simulated annealing: {local_min.score()}"
)
# starting the problem with initial coordinates (12, 47)
prob = SearchProblem(x=12, y=47, step_size=1, function_to_optimize=test_f1)
local_min = simulated_annealing(
prob, find_max=True, max_x=100, min_x=5, max_y=50, min_y=-5, visualization=True
)
print(
"The maximum score for f(x, y) = x^2 + y^2 with the domain 100 > x > 5 "
f"and 50 > y > - 5 found via simulated annealing: {local_min.score()}"
)
def test_f2(x, y):
return (3 * x**2) - (6 * y)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f2)
local_min = simulated_annealing(prob, find_max=False, visualization=True)
print(
"The minimum score for f(x, y) = 3*x^2 - 6*y found via simulated annealing: "
f"{local_min.score()}"
)
prob = SearchProblem(x=3, y=4, step_size=1, function_to_optimize=test_f2)
local_min = simulated_annealing(prob, find_max=True, visualization=True)
print(
"The maximum score for f(x, y) = 3*x^2 - 6*y found via simulated annealing: "
f"{local_min.score()}"
)
| -1 |
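The acceptance rule at the heart of the file above — accept a worse neighbor with probability exp(change / temperature) — can be sketched on a 1-D function without the SearchProblem class. Everything here (the `anneal_1d` helper, its parameters, the seeded run) is a hypothetical illustration, not the PR's API:

```python
import math
import random

def anneal_1d(f, x0, step=0.1, temp=1.0, rate=0.01, threshold=0.001):
    # Minimal 1-D simulated-annealing sketch: propose a nearby point,
    # always accept improvements, and accept worse points with
    # probability exp(change / temp); cool the temperature each step.
    x = x0
    while temp >= threshold:
        neighbor = x + random.uniform(-step, step)
        change = f(x) - f(neighbor)  # positive when the neighbor is better
        if change > 0 or random.random() < math.exp(change / temp):
            x = neighbor
        temp -= temp * rate  # geometric cooling schedule
    return x

random.seed(0)  # deterministic run for the example
best = anneal_1d(lambda x: x * x, x0=12.0)
```

With roughly 700 cooling steps the walk drifts toward the minimum of x**2 near 0; the exact endpoint depends on the random seed.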
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
A pure Python implementation of the insertion sort algorithm
This algorithm sorts a collection by comparing adjacent elements.
When it finds that order is not respected, it moves the element compared
backward until the order is correct. It then goes back directly to the
element's initial position resuming forward comparison.
For doctests run following command:
python3 -m doctest -v insertion_sort.py
For manual testing run:
python3 insertion_sort.py
"""
from collections.abc import MutableSequence
from typing import Any, Protocol, TypeVar
class Comparable(Protocol):
def __lt__(self, other: Any, /) -> bool:
...
T = TypeVar("T", bound=Comparable)
def insertion_sort(collection: MutableSequence[T]) -> MutableSequence[T]:
"""A pure Python implementation of the insertion sort algorithm
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered in ascending order
Examples:
>>> insertion_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> insertion_sort([]) == sorted([])
True
>>> insertion_sort([-2, -5, -45]) == sorted([-2, -5, -45])
True
>>> insertion_sort(['d', 'a', 'b', 'e', 'c']) == sorted(['d', 'a', 'b', 'e', 'c'])
True
>>> import random
>>> collection = random.sample(range(-50, 50), 100)
>>> insertion_sort(collection) == sorted(collection)
True
>>> import string
>>> collection = random.choices(string.ascii_letters + string.digits, k=100)
>>> insertion_sort(collection) == sorted(collection)
True
"""
for insert_index in range(1, len(collection)):
insert_value = collection[insert_index]
while insert_index > 0 and insert_value < collection[insert_index - 1]:
collection[insert_index] = collection[insert_index - 1]
insert_index -= 1
collection[insert_index] = insert_value
return collection
if __name__ == "__main__":
from doctest import testmod
testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(f"{insertion_sort(unsorted) = }")
| """
A pure Python implementation of the insertion sort algorithm
This algorithm sorts a collection by comparing adjacent elements.
When it finds that order is not respected, it moves the element compared
backward until the order is correct. It then goes back directly to the
element's initial position resuming forward comparison.
For doctests run following command:
python3 -m doctest -v insertion_sort.py
For manual testing run:
python3 insertion_sort.py
"""
from collections.abc import MutableSequence
from typing import Any, Protocol, TypeVar
class Comparable(Protocol):
def __lt__(self, other: Any, /) -> bool:
...
T = TypeVar("T", bound=Comparable)
def insertion_sort(collection: MutableSequence[T]) -> MutableSequence[T]:
"""A pure Python implementation of the insertion sort algorithm
:param collection: some mutable ordered collection with heterogeneous
comparable items inside
:return: the same collection ordered in ascending order
Examples:
>>> insertion_sort([0, 5, 3, 2, 2])
[0, 2, 2, 3, 5]
>>> insertion_sort([]) == sorted([])
True
>>> insertion_sort([-2, -5, -45]) == sorted([-2, -5, -45])
True
>>> insertion_sort(['d', 'a', 'b', 'e', 'c']) == sorted(['d', 'a', 'b', 'e', 'c'])
True
>>> import random
>>> collection = random.sample(range(-50, 50), 100)
>>> insertion_sort(collection) == sorted(collection)
True
>>> import string
>>> collection = random.choices(string.ascii_letters + string.digits, k=100)
>>> insertion_sort(collection) == sorted(collection)
True
"""
for insert_index in range(1, len(collection)):
insert_value = collection[insert_index]
while insert_index > 0 and insert_value < collection[insert_index - 1]:
collection[insert_index] = collection[insert_index - 1]
insert_index -= 1
collection[insert_index] = insert_value
return collection
if __name__ == "__main__":
from doctest import testmod
testmod()
user_input = input("Enter numbers separated by a comma:\n").strip()
unsorted = [int(item) for item in user_input.split(",")]
print(f"{insertion_sort(unsorted) = }")
| -1 |
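The shifting loop in the row above can be seen in action with a tiny standalone sketch (same algorithm, trimmed of the Protocol/TypeVar machinery):

```python
def insertion_sort(collection: list) -> list:
    # Shift larger elements one slot right, then drop the saved value
    # into the gap; the prefix left of insert_index is always sorted.
    for insert_index in range(1, len(collection)):
        insert_value = collection[insert_index]
        while insert_index > 0 and insert_value < collection[insert_index - 1]:
            collection[insert_index] = collection[insert_index - 1]
            insert_index -= 1
        collection[insert_index] = insert_value
    return collection

print(insertion_sort([0, 5, 3, 2, 2]))  # [0, 2, 2, 3, 5]
```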
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| """
Calculate the Product Sum from a Special Array.
reference: https://dev.to/sfrasica/algorithms-product-sum-from-an-array-dc6
Python doctests can be run with the following command:
python -m doctest -v product_sum.py
Calculate the product sum of a "special" array which can contain integers or nested
arrays. The product sum is obtained by adding all elements and multiplying by their
respective depths.
For example, in the array [x, y], the product sum is (x + y). In the array [x, [y, z]],
the product sum is x + 2 * (y + z). In the array [x, [y, [z]]],
the product sum is x + 2 * (y + 3z).
Example Input:
[5, 2, [7, -1], 3, [6, [-13, 8], 4]]
Output: 12
"""
def product_sum(arr: list[int | list], depth: int) -> int:
"""
Recursively calculates the product sum of an array.
The product sum of an array is defined as the sum of its elements multiplied by
their respective depths. If an element is a list, its product sum is calculated
recursively by multiplying the sum of its elements with its depth plus one.
Args:
arr: The array of integers and nested lists.
depth: The current depth level.
Returns:
int: The product sum of the array.
Examples:
>>> product_sum([1, 2, 3], 1)
6
>>> product_sum([-1, 2, [-3, 4]], 2)
8
>>> product_sum([1, 2, 3], -1)
-6
>>> product_sum([1, 2, 3], 0)
0
>>> product_sum([1, 2, 3], 7)
42
>>> product_sum((1, 2, 3), 7)
42
>>> product_sum({1, 2, 3}, 7)
42
>>> product_sum([1, -1], 1)
0
>>> product_sum([1, -2], 1)
-1
>>> product_sum([-3.5, [1, [0.5]]], 1)
1.5
"""
total_sum = 0
for ele in arr:
total_sum += product_sum(ele, depth + 1) if isinstance(ele, list) else ele
return total_sum * depth
def product_sum_array(array: list[int | list]) -> int:
"""
Calculates the product sum of an array.
Args:
array (List[Union[int, List]]): The array of integers and nested lists.
Returns:
int: The product sum of the array.
Examples:
>>> product_sum_array([1, 2, 3])
6
>>> product_sum_array([1, [2, 3]])
11
>>> product_sum_array([1, [2, [3, 4]]])
47
>>> product_sum_array([0])
0
>>> product_sum_array([-3.5, [1, [0.5]]])
1.5
>>> product_sum_array([1, -2])
-1
"""
return product_sum(array, 1)
if __name__ == "__main__":
import doctest
doctest.testmod()
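A minimal standalone sketch of the same depth-weighted recursion the docstrings describe (names chosen here for illustration only):

```python
def depth_weighted_sum(items: list, depth: int = 1) -> int:
    """Sum elements at this depth; recurse one level deeper for nested lists."""
    total = 0
    for item in items:
        total += depth_weighted_sum(item, depth + 1) if isinstance(item, list) else item
    return total * depth

# 5 + 2 + 3 at depth 1, plus 2*(7 - 1), plus 2*(6 + 4 + 3*(-13 + 8)) = 12
print(depth_weighted_sum([5, 2, [7, -1], 3, [6, [-13, 8], 4]]))  # -> 12
```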
|
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| """Convert a Decimal Number to a Binary Number."""
def decimal_to_binary_iterative(num: int) -> str:
"""
Convert an Integer Decimal Number to a Binary Number as str.
>>> decimal_to_binary_iterative(0)
'0b0'
>>> decimal_to_binary_iterative(2)
'0b10'
>>> decimal_to_binary_iterative(7)
'0b111'
>>> decimal_to_binary_iterative(35)
'0b100011'
>>> # negatives work too
>>> decimal_to_binary_iterative(-2)
'-0b10'
>>> # other floats will error
>>> decimal_to_binary_iterative(16.16) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'float' object cannot be interpreted as an integer
>>> # strings will error as well
>>> decimal_to_binary_iterative('0xfffff') # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TypeError: 'str' object cannot be interpreted as an integer
"""
if isinstance(num, float):
raise TypeError("'float' object cannot be interpreted as an integer")
if isinstance(num, str):
raise TypeError("'str' object cannot be interpreted as an integer")
if num == 0:
return "0b0"
negative = False
if num < 0:
negative = True
num = -num
binary: list[int] = []
while num > 0:
binary.insert(0, num % 2)
num >>= 1
if negative:
return "-0b" + "".join(str(e) for e in binary)
return "0b" + "".join(str(e) for e in binary)
def decimal_to_binary_recursive_helper(decimal: int) -> str:
"""
Take a positive integer value and return its binary equivalent.
>>> decimal_to_binary_recursive_helper(1000)
'1111101000'
>>> decimal_to_binary_recursive_helper("72")
'1001000'
>>> decimal_to_binary_recursive_helper("number")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'number'
"""
decimal = int(decimal)
if decimal in (0, 1): # Exit cases for the recursion
return str(decimal)
div, mod = divmod(decimal, 2)
return decimal_to_binary_recursive_helper(div) + str(mod)
def decimal_to_binary_recursive(number: str) -> str:
"""
Take an integer value and raise ValueError for wrong inputs,
call the function above and return the output with prefix "0b" & "-0b"
for positive and negative integers respectively.
>>> decimal_to_binary_recursive(0)
'0b0'
>>> decimal_to_binary_recursive(40)
'0b101000'
>>> decimal_to_binary_recursive(-40)
'-0b101000'
>>> decimal_to_binary_recursive(40.8)
Traceback (most recent call last):
...
ValueError: Input value is not an integer
>>> decimal_to_binary_recursive("forty")
Traceback (most recent call last):
...
ValueError: Input value is not an integer
"""
number = str(number).strip()
if not number:
raise ValueError("No input value was provided")
negative = "-" if number.startswith("-") else ""
number = number.lstrip("-")
if not number.isnumeric():
raise ValueError("Input value is not an integer")
return f"{negative}0b{decimal_to_binary_recursive_helper(int(number))}"
if __name__ == "__main__":
import doctest
doctest.testmod()
print(decimal_to_binary_recursive(input("Input a decimal number: ")))
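The iterative conversion above can be condensed with `divmod`, which yields the quotient and the next bit in one step. A sketch (function name is illustrative), checked against Python's built-in `bin()`:

```python
def to_binary(num: int) -> str:
    """Convert an int to a '0b'-prefixed binary string via repeated divmod."""
    if num == 0:
        return "0b0"
    sign, num = ("-", -num) if num < 0 else ("", num)
    bits = []
    while num:
        num, bit = divmod(num, 2)  # bit is the least significant remaining bit
        bits.append(str(bit))
    return sign + "0b" + "".join(reversed(bits))

# Agrees with the built-in for the doctest values above:
for n in (0, 2, 7, 35, -2):
    assert to_binary(n) == bin(n)
```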
|
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| """
Modular Exponential.
Modular exponentiation is a type of exponentiation performed over a modulus.
For more explanation, please check
https://en.wikipedia.org/wiki/Modular_exponentiation
"""
"""Calculate Modular Exponential."""
def modular_exponential(base: int, power: int, mod: int) -> int:
"""
>>> modular_exponential(5, 0, 10)
1
>>> modular_exponential(2, 8, 7)
4
>>> modular_exponential(3, -2, 9)
-1
"""
if power < 0:
return -1
base %= mod
result = 1
while power > 0:
if power & 1:
result = (result * base) % mod
power = power >> 1
base = (base * base) % mod
return result
def main():
"""Call Modular Exponential Function."""
print(modular_exponential(3, 200, 13))
if __name__ == "__main__":
import doctest
doctest.testmod()
main()
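The loop above is the classic square-and-multiply scheme: each iteration consumes one bit of the exponent. A compact sketch, verified against Python's built-in three-argument `pow`:

```python
def mod_pow(base: int, power: int, mod: int) -> int:
    """Compute (base ** power) % mod by processing power one bit at a time."""
    base %= mod
    result = 1
    while power > 0:
        if power & 1:                 # low bit set -> multiply this factor in
            result = result * base % mod
        power >>= 1                   # move to the next bit of the exponent
        base = base * base % mod      # square the base for the next bit
    return result

assert mod_pow(3, 200, 13) == pow(3, 200, 13)  # built-in as an oracle
print(mod_pow(3, 200, 13))  # -> 9
```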
|
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| def hamming_distance(string1: str, string2: str) -> int:
"""Calculate the Hamming distance between two equal length strings
In information theory, the Hamming distance between two strings of equal
length is the number of positions at which the corresponding symbols are
different. https://en.wikipedia.org/wiki/Hamming_distance
Args:
string1 (str): Sequence 1
string2 (str): Sequence 2
Returns:
int: Hamming distance
>>> hamming_distance("python", "python")
0
>>> hamming_distance("karolin", "kathrin")
3
>>> hamming_distance("00000", "11111")
5
>>> hamming_distance("karolin", "kath")
Traceback (most recent call last):
...
ValueError: String lengths must match!
"""
if len(string1) != len(string2):
raise ValueError("String lengths must match!")
count = 0
for char1, char2 in zip(string1, string2):
if char1 != char2:
count += 1
return count
if __name__ == "__main__":
import doctest
doctest.testmod()
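Since `zip` pairs the characters positionally, the counting loop above collapses to a single generator expression. A sketch of the same computation (function name is illustrative):

```python
def hamming(a: str, b: str) -> int:
    """Count positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("String lengths must match!")
    # bool is a subclass of int, so sum() counts the True (differing) positions
    return sum(c1 != c2 for c1, c2 in zip(a, b))

print(hamming("karolin", "kathrin"))  # -> 3
print(hamming("00000", "11111"))      # -> 5
```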
|
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| """
Algorithm for calculating the most cost-efficient sequence for converting one string
into another.
The only allowed operations, and their costs, are:
--- Cost to copy a character is copy_cost
--- Cost to replace a character is replace_cost
--- Cost to delete a character is delete_cost
--- Cost to insert a character is insert_cost
"""
def compute_transform_tables(
source_string: str,
destination_string: str,
copy_cost: int,
replace_cost: int,
delete_cost: int,
insert_cost: int,
) -> tuple[list[list[int]], list[list[str]]]:
source_seq = list(source_string)
destination_seq = list(destination_string)
len_source_seq = len(source_seq)
len_destination_seq = len(destination_seq)
costs = [
[0 for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
ops = [
["0" for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
for i in range(1, len_source_seq + 1):
costs[i][0] = i * delete_cost
ops[i][0] = f"D{source_seq[i - 1]:c}"
for i in range(1, len_destination_seq + 1):
costs[0][i] = i * insert_cost
ops[0][i] = f"I{destination_seq[i - 1]:c}"
for i in range(1, len_source_seq + 1):
for j in range(1, len_destination_seq + 1):
if source_seq[i - 1] == destination_seq[j - 1]:
costs[i][j] = costs[i - 1][j - 1] + copy_cost
ops[i][j] = f"C{source_seq[i - 1]:c}"
else:
costs[i][j] = costs[i - 1][j - 1] + replace_cost
ops[i][j] = f"R{source_seq[i - 1]:c}" + str(destination_seq[j - 1])
if costs[i - 1][j] + delete_cost < costs[i][j]:
costs[i][j] = costs[i - 1][j] + delete_cost
ops[i][j] = f"D{source_seq[i - 1]:c}"
if costs[i][j - 1] + insert_cost < costs[i][j]:
costs[i][j] = costs[i][j - 1] + insert_cost
ops[i][j] = f"I{destination_seq[j - 1]:c}"
return costs, ops
def assemble_transformation(ops: list[list[str]], i: int, j: int) -> list[str]:
if i == 0 and j == 0:
return []
else:
if ops[i][j][0] in {"C", "R"}:
seq = assemble_transformation(ops, i - 1, j - 1)
seq.append(ops[i][j])
return seq
elif ops[i][j][0] == "D":
seq = assemble_transformation(ops, i - 1, j)
seq.append(ops[i][j])
return seq
else:
seq = assemble_transformation(ops, i, j - 1)
seq.append(ops[i][j])
return seq
if __name__ == "__main__":
_, operations = compute_transform_tables("Python", "Algorithms", -1, 1, 2, 2)
m = len(operations)
n = len(operations[0])
sequence = assemble_transformation(operations, m - 1, n - 1)
string = list("Python")
i = 0
cost = 0
with open("min_cost.txt", "w") as file:
for op in sequence:
print("".join(string))
if op[0] == "C":
file.write("%-16s" % "Copy %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost -= 1
elif op[0] == "R":
string[i] = op[2]
file.write("%-16s" % ("Replace %c" % op[1] + " with " + str(op[2])))
file.write("\t\t" + "".join(string))
file.write("\r\n")
cost += 1
elif op[0] == "D":
string.pop(i)
file.write("%-16s" % "Delete %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
else:
string.insert(i, op[1])
file.write("%-16s" % "Insert %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
i += 1
print("".join(string))
print("Cost: ", cost)
file.write("\r\nMinimum cost: " + str(cost))
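The transform-table code above can be reduced to a cost-only recurrence when the operation sequence itself is not needed. The following standalone sketch (the function name `min_conversion_cost` is ours, not part of the file) computes just the minimum total conversion cost using the same copy/replace/delete/insert rules:

```python
def min_conversion_cost(
    source: str,
    dest: str,
    copy_cost: int,
    replace_cost: int,
    delete_cost: int,
    insert_cost: int,
) -> int:
    """Minimum total cost to convert source into dest (costs may be negative)."""
    m, n = len(source), len(dest)
    costs = [[0] * (n + 1) for _ in range(m + 1)]
    # Base cases: delete everything from source, or insert everything into dest
    for i in range(1, m + 1):
        costs[i][0] = i * delete_cost
    for j in range(1, n + 1):
        costs[0][j] = j * insert_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # Copy when characters match, otherwise replace
            match_cost = copy_cost if source[i - 1] == dest[j - 1] else replace_cost
            costs[i][j] = min(
                costs[i - 1][j - 1] + match_cost,  # copy or replace
                costs[i - 1][j] + delete_cost,     # delete from source
                costs[i][j - 1] + insert_cost,     # insert into dest
            )
    return costs[m][n]
```

With the same costs used in the driver above (copy -1, replace 1, delete 2, insert 2), converting "cat" to "cut" costs -1: two copies at -1 each plus one replace at +1.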
| """
Algorithm for calculating the most cost-efficient sequence for converting one string
into another.
The only allowed operations, and their costs, are:
--- Cost to copy a character is copy_cost
--- Cost to replace a character is replace_cost
--- Cost to delete a character is delete_cost
--- Cost to insert a character is insert_cost
"""
def compute_transform_tables(
source_string: str,
destination_string: str,
copy_cost: int,
replace_cost: int,
delete_cost: int,
insert_cost: int,
) -> tuple[list[list[int]], list[list[str]]]:
source_seq = list(source_string)
destination_seq = list(destination_string)
len_source_seq = len(source_seq)
len_destination_seq = len(destination_seq)
costs = [
[0 for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
ops = [
["0" for _ in range(len_destination_seq + 1)] for _ in range(len_source_seq + 1)
]
for i in range(1, len_source_seq + 1):
costs[i][0] = i * delete_cost
ops[i][0] = f"D{source_seq[i - 1]:c}"
for i in range(1, len_destination_seq + 1):
costs[0][i] = i * insert_cost
ops[0][i] = f"I{destination_seq[i - 1]:c}"
for i in range(1, len_source_seq + 1):
for j in range(1, len_destination_seq + 1):
if source_seq[i - 1] == destination_seq[j - 1]:
costs[i][j] = costs[i - 1][j - 1] + copy_cost
ops[i][j] = f"C{source_seq[i - 1]:c}"
else:
costs[i][j] = costs[i - 1][j - 1] + replace_cost
ops[i][j] = f"R{source_seq[i - 1]:c}" + str(destination_seq[j - 1])
if costs[i - 1][j] + delete_cost < costs[i][j]:
costs[i][j] = costs[i - 1][j] + delete_cost
ops[i][j] = f"D{source_seq[i - 1]:c}"
if costs[i][j - 1] + insert_cost < costs[i][j]:
costs[i][j] = costs[i][j - 1] + insert_cost
ops[i][j] = f"I{destination_seq[j - 1]:c}"
return costs, ops
def assemble_transformation(ops: list[list[str]], i: int, j: int) -> list[str]:
if i == 0 and j == 0:
return []
else:
if ops[i][j][0] in {"C", "R"}:
seq = assemble_transformation(ops, i - 1, j - 1)
seq.append(ops[i][j])
return seq
elif ops[i][j][0] == "D":
seq = assemble_transformation(ops, i - 1, j)
seq.append(ops[i][j])
return seq
else:
seq = assemble_transformation(ops, i, j - 1)
seq.append(ops[i][j])
return seq
if __name__ == "__main__":
_, operations = compute_transform_tables("Python", "Algorithms", -1, 1, 2, 2)
m = len(operations)
n = len(operations[0])
sequence = assemble_transformation(operations, m - 1, n - 1)
string = list("Python")
i = 0
cost = 0
with open("min_cost.txt", "w") as file:
for op in sequence:
print("".join(string))
if op[0] == "C":
file.write("%-16s" % "Copy %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost -= 1
elif op[0] == "R":
string[i] = op[2]
file.write("%-16s" % ("Replace %c" % op[1] + " with " + str(op[2])))
file.write("\t\t" + "".join(string))
file.write("\r\n")
cost += 1
elif op[0] == "D":
string.pop(i)
file.write("%-16s" % "Delete %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
else:
string.insert(i, op[1])
file.write("%-16s" % "Insert %c" % op[1])
file.write("\t\t\t" + "".join(string))
file.write("\r\n")
cost += 2
i += 1
print("".join(string))
print("Cost: ", cost)
file.write("\r\nMinimum cost: " + str(cost))
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| r"""
Problem:
The n queens problem is: placing N queens on a N * N chess board such that no queen
can attack any other queens placed on that chess board. This means that one queen
cannot have any other queen on its horizontal, vertical and diagonal lines.
Solution:
To solve this problem we will use simple math. First, we know the queen can move in all
possible directions; we can simplify them into four: vertical, horizontal, left
diagonal and right diagonal.
We can visualize it like this:
left diagonal = \
right diagonal = /
On a chessboard vertical movement could be the rows and horizontal movement could be
the columns.
In programming we can use an array, where each index represents a row and each value
in the array represents the column of the queen in that row. For example:
. Q . . We have this chessboard with one queen in each column, and the queens
. . . Q can't attack each other.
Q . . . The array for this example would look like this: [1, 3, 0, 2]
. . Q .
So if we use an array and verify that each value in it is distinct from the others,
we know that at least the queens can't attack each other horizontally or
vertically.
At this point we are halfway done, and we will treat the chessboard as a
Cartesian plane. From here on we rely on basic math; in school we learned this
formula:
Slope of a line:
y2 - y1
m = ----------
x2 - x1
This formula allows us to get the slope. For the angles 45ΒΊ (right diagonal) and 135ΒΊ
(left diagonal) it gives us m = 1 and m = -1, respectively.
See::
https://www.enotes.com/homework-help/write-equation-line-that-hits-origin-45-degree-1474860
Then we have this other formula:
Slope intercept:
y = mx + b
b is where the line crosses the Y axis (for more information see:
https://www.mathsisfun.com/y_intercept.html). If we rearrange the formula to solve
for b we get:
y - mx = b
And since we already have the m values for the angles 45ΒΊ and 135ΒΊ, this formula would
look like this:
45ΒΊ: y - (1)x = b
45ΒΊ: y - x = b
135ΒΊ: y - (-1)x = b
135ΒΊ: y + x = b
y = row
x = column
Applying these two formulas we can check whether a queen in some position is being
attacked by another one, or vice versa.
"""
from __future__ import annotations
def depth_first_search(
possible_board: list[int],
diagonal_right_collisions: list[int],
diagonal_left_collisions: list[int],
boards: list[list[str]],
n: int,
) -> None:
"""
>>> boards = []
>>> depth_first_search([], [], [], boards, 4)
>>> for board in boards:
... print(board)
['. Q . . ', '. . . Q ', 'Q . . . ', '. . Q . ']
['. . Q . ', 'Q . . . ', '. . . Q ', '. Q . . ']
"""
# Get next row in the current board (possible_board) to fill it with a queen
row = len(possible_board)
# If row is equal to the size of the board it means there is a queen in each row of
# the current board (possible_board)
if row == n:
# We convert the variable possible_board that looks like this: [1, 3, 0, 2] to
# this: ['. Q . . ', '. . . Q ', 'Q . . . ', '. . Q . ']
boards.append([". " * i + "Q " + ". " * (n - 1 - i) for i in possible_board])
return
# We iterate over each column in the row to find all possible results in each row
for col in range(n):
# We apply what we learned previously. First we check that the current board
# (possible_board) does not already contain this column value, because if it
# does it means there is a vertical collision. Then we apply the two formulas
# we learned before:
#
# 45ΒΊ: y - x = b or 45: row - col = b
# 135ΒΊ: y + x = b or row + col = b.
#
# Then we verify that the results of these two formulas do not already exist in
# their respective variables (diagonal_right_collisions,
# diagonal_left_collisions).
#
# If any of these checks is True it means there is a collision, so we continue
# to the next value in the for loop.
if (
col in possible_board
or row - col in diagonal_right_collisions
or row + col in diagonal_left_collisions
):
continue
# If there is no collision we call the dfs function again with updated inputs
depth_first_search(
[*possible_board, col],
[*diagonal_right_collisions, row - col],
[*diagonal_left_collisions, row + col],
boards,
n,
)
def n_queens_solution(n: int) -> None:
boards: list[list[str]] = []
depth_first_search([], [], [], boards, n)
# Print all the boards
for board in boards:
for column in board:
print(column)
print("")
print(len(boards), "solutions were found.")
if __name__ == "__main__":
import doctest
doctest.testmod()
n_queens_solution(4)
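The diagonal formulas from the docstring can be checked directly. The helper below (`queens_attack` is a hypothetical name, not part of the file) tests whether two queens attack each other using exactly the column, row, y - x and y + x comparisons described above:

```python
def queens_attack(r1: int, c1: int, r2: int, c2: int) -> bool:
    """True if queens at (r1, c1) and (r2, c2) can attack each other."""
    return (
        c1 == c2              # same column (vertical)
        or r1 == r2           # same row (horizontal)
        or r1 - c1 == r2 - c2  # same right diagonal (45 degrees, y - x = b)
        or r1 + c1 == r2 + c2  # same left diagonal (135 degrees, y + x = b)
    )
```

For the solution board [1, 3, 0, 2] from the docstring, no pair of queens should attack each other, while e.g. (0, 0) and (1, 1) share the right diagonal.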
| r"""
Problem:
The n queens problem is: placing N queens on a N * N chess board such that no queen
can attack any other queens placed on that chess board. This means that one queen
cannot have any other queen on its horizontal, vertical and diagonal lines.
Solution:
To solve this problem we will use simple math. First, we know the queen can move in all
possible directions; we can simplify them into four: vertical, horizontal, left
diagonal and right diagonal.
We can visualize it like this:
left diagonal = \
right diagonal = /
On a chessboard vertical movement could be the rows and horizontal movement could be
the columns.
In programming we can use an array, where each index represents a row and each value
in the array represents the column of the queen in that row. For example:
. Q . . We have this chessboard with one queen in each column, and the queens
. . . Q can't attack each other.
Q . . . The array for this example would look like this: [1, 3, 0, 2]
. . Q .
So if we use an array and verify that each value in it is distinct from the others,
we know that at least the queens can't attack each other horizontally or
vertically.
At this point we are halfway done, and we will treat the chessboard as a
Cartesian plane. From here on we rely on basic math; in school we learned this
formula:
Slope of a line:
y2 - y1
m = ----------
x2 - x1
This formula allows us to get the slope. For the angles 45ΒΊ (right diagonal) and 135ΒΊ
(left diagonal) it gives us m = 1 and m = -1, respectively.
See::
https://www.enotes.com/homework-help/write-equation-line-that-hits-origin-45-degree-1474860
Then we have this other formula:
Slope intercept:
y = mx + b
b is where the line crosses the Y axis (for more information see:
https://www.mathsisfun.com/y_intercept.html). If we rearrange the formula to solve
for b we get:
y - mx = b
And since we already have the m values for the angles 45ΒΊ and 135ΒΊ, this formula would
look like this:
45ΒΊ: y - (1)x = b
45ΒΊ: y - x = b
135ΒΊ: y - (-1)x = b
135ΒΊ: y + x = b
y = row
x = column
Applying these two formulas we can check whether a queen in some position is being
attacked by another one, or vice versa.
"""
from __future__ import annotations
def depth_first_search(
possible_board: list[int],
diagonal_right_collisions: list[int],
diagonal_left_collisions: list[int],
boards: list[list[str]],
n: int,
) -> None:
"""
>>> boards = []
>>> depth_first_search([], [], [], boards, 4)
>>> for board in boards:
... print(board)
['. Q . . ', '. . . Q ', 'Q . . . ', '. . Q . ']
['. . Q . ', 'Q . . . ', '. . . Q ', '. Q . . ']
"""
# Get next row in the current board (possible_board) to fill it with a queen
row = len(possible_board)
# If row is equal to the size of the board it means there is a queen in each row of
# the current board (possible_board)
if row == n:
# We convert the variable possible_board that looks like this: [1, 3, 0, 2] to
# this: ['. Q . . ', '. . . Q ', 'Q . . . ', '. . Q . ']
boards.append([". " * i + "Q " + ". " * (n - 1 - i) for i in possible_board])
return
# We iterate over each column in the row to find all possible results in each row
for col in range(n):
# We apply what we learned previously. First we check that the current board
# (possible_board) does not already contain this column value, because if it
# does it means there is a vertical collision. Then we apply the two formulas
# we learned before:
#
# 45ΒΊ: y - x = b or 45: row - col = b
# 135ΒΊ: y + x = b or row + col = b.
#
# Then we verify that the results of these two formulas do not already exist in
# their respective variables (diagonal_right_collisions,
# diagonal_left_collisions).
#
# If any of these checks is True it means there is a collision, so we continue
# to the next value in the for loop.
if (
col in possible_board
or row - col in diagonal_right_collisions
or row + col in diagonal_left_collisions
):
continue
# If there is no collision we call the dfs function again with updated inputs
depth_first_search(
[*possible_board, col],
[*diagonal_right_collisions, row - col],
[*diagonal_left_collisions, row + col],
boards,
n,
)
def n_queens_solution(n: int) -> None:
boards: list[list[str]] = []
depth_first_search([], [], [], boards, n)
# Print all the boards
for board in boards:
for column in board:
print(column)
print("")
print(len(boards), "solutions were found.")
if __name__ == "__main__":
import doctest
doctest.testmod()
n_queens_solution(4)
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
from collections.abc import Callable
from typing import Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LFU Cache
>>> node = DoubleLinkedListNode(1,1)
>>> node
Node: key: 1, val: 1, freq: 0, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.freq: int = 0
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return "Node: key: {}, val: {}, freq: {}, has next: {}, has prev: {}".format(
self.key, self.val, self.freq, self.next is not None, self.prev is not None
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LFU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, freq: 0, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, freq: 0, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True,
Node: key: 2, val: 20, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 2, val: 20, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, freq: 0, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node at the tail of the list and shifts it to its proper position
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
node.freq += 1
self._position_node(node)
def _position_node(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Moves the node forward to maintain the invariant that the list is sorted by freq value
"""
while node.prev is not None and node.prev.freq > node.freq:
# swap node with previous node
previous_node = node.prev
node.prev = previous_node.prev
previous_node.next = node.prev
node.next = previous_node
previous_node.prev = node
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LFUCache(Generic[T, U]):
"""
LFU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LFUCache(2)
>>> cache.put(1, 1)
>>> cache.put(2, 2)
>>> cache.get(1)
1
>>> cache.put(3, 3)
>>> cache.get(2) is None
True
>>> cache.put(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current_size=2)
>>> @LFUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 101):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=196, misses=100, capacity=100, current_size=100)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LFUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current_size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LFUCache(1)
>>> 1 in cache
False
>>> cache.put(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List.
Returns None if key is not present in cache
"""
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def put(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert self.list.remove(first_node) is not None
# first_node guaranteed to be in list
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls: type[LFUCache[T, U]], size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LFU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LFUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].put(args[0], result)
return result
def cache_info() -> LFUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info) # noqa: B010
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
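The eviction policy implemented by the frequency-sorted DoubleLinkedList above can be sketched with a plain dictionary. `TinyLFU` below is a hypothetical, simplified stand-in, not part of the file: it evicts the key with the lowest access frequency (ties broken by least recently touched), which approximates the role of the list's front node in the class above.

```python
class TinyLFU:
    """Minimal LFU cache sketch: dict of (value, frequency) plus a touch order."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.store: dict = {}  # key -> [value, frequency]
        self.order: dict = {}  # key -> tick of last access (tiebreaker)
        self.tick = 0

    def get(self, key):
        if key not in self.store:
            return None
        self.store[key][1] += 1  # bump access frequency
        self.tick += 1
        self.order[key] = self.tick
        return self.store[key][0]

    def put(self, key, value) -> None:
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the lowest-frequency key; break ties by oldest touch
            victim = min(
                self.store, key=lambda k: (self.store[k][1], self.order[k])
            )
            del self.store[victim]
            del self.order[victim]
        freq = self.store[key][1] + 1 if key in self.store else 1
        self.store[key] = [value, freq]
        self.tick += 1
        self.order[key] = self.tick
```

This reproduces the behavior of the doctest above: after put(1, 1), put(2, 2), get(1), a put(3, 3) on a capacity-2 cache evicts key 2, since key 1 has the higher frequency.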
| from __future__ import annotations
from collections.abc import Callable
from typing import Generic, TypeVar
T = TypeVar("T")
U = TypeVar("U")
class DoubleLinkedListNode(Generic[T, U]):
"""
Double Linked List Node built specifically for LFU Cache
>>> node = DoubleLinkedListNode(1,1)
>>> node
Node: key: 1, val: 1, freq: 0, has next: False, has prev: False
"""
def __init__(self, key: T | None, val: U | None):
self.key = key
self.val = val
self.freq: int = 0
self.next: DoubleLinkedListNode[T, U] | None = None
self.prev: DoubleLinkedListNode[T, U] | None = None
def __repr__(self) -> str:
return "Node: key: {}, val: {}, freq: {}, has next: {}, has prev: {}".format(
self.key, self.val, self.freq, self.next is not None, self.prev is not None
)
class DoubleLinkedList(Generic[T, U]):
"""
Double Linked List built specifically for LFU Cache
>>> dll: DoubleLinkedList = DoubleLinkedList()
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> first_node = DoubleLinkedListNode(1,10)
>>> first_node
Node: key: 1, val: 10, freq: 0, has next: False, has prev: False
>>> dll.add(first_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> # node is mutated
>>> first_node
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True
>>> second_node = DoubleLinkedListNode(2,20)
>>> second_node
Node: key: 2, val: 20, freq: 0, has next: False, has prev: False
>>> dll.add(second_node)
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 1, val: 10, freq: 1, has next: True, has prev: True,
Node: key: 2, val: 20, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> removed_node = dll.remove(first_node)
>>> assert removed_node == first_node
>>> dll
DoubleLinkedList,
Node: key: None, val: None, freq: 0, has next: True, has prev: False,
Node: key: 2, val: 20, freq: 1, has next: True, has prev: True,
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> # Attempt to remove node not on list
>>> removed_node = dll.remove(first_node)
>>> removed_node is None
True
>>> # Attempt to remove head or rear
>>> dll.head
Node: key: None, val: None, freq: 0, has next: True, has prev: False
>>> dll.remove(dll.head) is None
True
>>> # Attempt to remove head or rear
>>> dll.rear
Node: key: None, val: None, freq: 0, has next: False, has prev: True
>>> dll.remove(dll.rear) is None
True
"""
def __init__(self) -> None:
self.head: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.rear: DoubleLinkedListNode[T, U] = DoubleLinkedListNode(None, None)
self.head.next, self.rear.prev = self.rear, self.head
def __repr__(self) -> str:
rep = ["DoubleLinkedList"]
node = self.head
while node.next is not None:
rep.append(str(node))
node = node.next
rep.append(str(self.rear))
return ",\n ".join(rep)
def add(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Adds the given node at the tail of the list and shifts it to its proper position
"""
previous = self.rear.prev
# All nodes other than self.head are guaranteed to have non-None previous
assert previous is not None
previous.next = node
node.prev = previous
self.rear.prev = node
node.next = self.rear
node.freq += 1
self._position_node(node)
def _position_node(self, node: DoubleLinkedListNode[T, U]) -> None:
"""
Moves the node forward to maintain the list's sort-by-frequency invariant
"""
while node.prev is not None and node.prev.freq > node.freq:
# swap node with previous node
previous_node = node.prev
node.prev = previous_node.prev
previous_node.next = node.prev
node.next = previous_node
previous_node.prev = node
def remove(
self, node: DoubleLinkedListNode[T, U]
) -> DoubleLinkedListNode[T, U] | None:
"""
Removes and returns the given node from the list
Returns None if node.prev or node.next is None
"""
if node.prev is None or node.next is None:
return None
node.prev.next = node.next
node.next.prev = node.prev
node.prev = None
node.next = None
return node
class LFUCache(Generic[T, U]):
"""
LFU Cache to store a given capacity of data. Can be used as a stand-alone object
or as a function decorator.
>>> cache = LFUCache(2)
>>> cache.put(1, 1)
>>> cache.put(2, 2)
>>> cache.get(1)
1
>>> cache.put(3, 3)
>>> cache.get(2) is None
True
>>> cache.put(4, 4)
>>> cache.get(1) is None
True
>>> cache.get(3)
3
>>> cache.get(4)
4
>>> cache
CacheInfo(hits=3, misses=2, capacity=2, current_size=2)
>>> @LFUCache.decorator(100)
... def fib(num):
... if num in (1, 2):
... return 1
... return fib(num - 1) + fib(num - 2)
>>> for i in range(1, 101):
... res = fib(i)
>>> fib.cache_info()
CacheInfo(hits=196, misses=100, capacity=100, current_size=100)
"""
# class variable to map the decorator functions to their respective instance
decorator_function_to_instance_map: dict[Callable[[T], U], LFUCache[T, U]] = {}
def __init__(self, capacity: int):
self.list: DoubleLinkedList[T, U] = DoubleLinkedList()
self.capacity = capacity
self.num_keys = 0
self.hits = 0
self.miss = 0
self.cache: dict[T, DoubleLinkedListNode[T, U]] = {}
def __repr__(self) -> str:
"""
Return the details for the cache instance
[hits, misses, capacity, current_size]
"""
return (
f"CacheInfo(hits={self.hits}, misses={self.miss}, "
f"capacity={self.capacity}, current_size={self.num_keys})"
)
def __contains__(self, key: T) -> bool:
"""
>>> cache = LFUCache(1)
>>> 1 in cache
False
>>> cache.put(1, 1)
>>> 1 in cache
True
"""
return key in self.cache
def get(self, key: T) -> U | None:
"""
Returns the value for the input key and updates the Double Linked List.
Returns None if key is not present in cache
"""
if key in self.cache:
self.hits += 1
value_node: DoubleLinkedListNode[T, U] = self.cache[key]
node = self.list.remove(self.cache[key])
assert node == value_node
# node is guaranteed not None because it is in self.cache
assert node is not None
self.list.add(node)
return node.val
self.miss += 1
return None
def put(self, key: T, value: U) -> None:
"""
Sets the value for the input key and updates the Double Linked List
"""
if key not in self.cache:
if self.num_keys >= self.capacity:
# delete first node when over capacity
first_node = self.list.head.next
# guaranteed to have a non-None first node when num_keys > 0
# explain to type checker via assertions
assert first_node is not None
assert first_node.key is not None
assert self.list.remove(first_node) is not None
# first_node guaranteed to be in list
del self.cache[first_node.key]
self.num_keys -= 1
self.cache[key] = DoubleLinkedListNode(key, value)
self.list.add(self.cache[key])
self.num_keys += 1
else:
node = self.list.remove(self.cache[key])
assert node is not None # node guaranteed to be in list
node.val = value
self.list.add(node)
@classmethod
def decorator(
cls: type[LFUCache[T, U]], size: int = 128
) -> Callable[[Callable[[T], U]], Callable[..., U]]:
"""
Decorator version of LFU Cache
Decorated function must be function of T -> U
"""
def cache_decorator_inner(func: Callable[[T], U]) -> Callable[..., U]:
def cache_decorator_wrapper(*args: T) -> U:
if func not in cls.decorator_function_to_instance_map:
cls.decorator_function_to_instance_map[func] = LFUCache(size)
result = cls.decorator_function_to_instance_map[func].get(args[0])
if result is None:
result = func(*args)
cls.decorator_function_to_instance_map[func].put(args[0], result)
return result
def cache_info() -> LFUCache[T, U]:
return cls.decorator_function_to_instance_map[func]
setattr(cache_decorator_wrapper, "cache_info", cache_info) # noqa: B010
return cache_decorator_wrapper
return cache_decorator_inner
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
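The LFU file above realises the eviction policy with a frequency-sorted doubly linked list. As a cross-reference (not the repository's implementation; the naming here is my own), the same rule can be sketched with plain dicts, relying on Python's insertion-ordered dicts for tie-breaking:

```python
from collections import defaultdict


class MiniLFU:
    """Least-frequently-used cache sketch: evict the key with the lowest
    use count, breaking ties by insertion order within that count."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.values: dict = {}
        self.freq: dict = {}  # key -> use count
        self.buckets: dict = defaultdict(dict)  # use count -> {key: None}, insertion-ordered
        self.min_freq = 0

    def _touch(self, key) -> None:
        """Move key from its current frequency bucket to the next one up."""
        count = self.freq[key]
        del self.buckets[count][key]
        if not self.buckets[count] and self.min_freq == count:
            self.min_freq = count + 1
        self.freq[key] = count + 1
        self.buckets[count + 1][key] = None

    def get(self, key):
        if key not in self.values:
            return None
        self._touch(key)
        return self.values[key]

    def put(self, key, value) -> None:
        if self.capacity <= 0:
            return
        if key in self.values:
            self.values[key] = value
            self._touch(key)
            return
        if len(self.values) >= self.capacity:
            evicted = next(iter(self.buckets[self.min_freq]))  # oldest lowest-freq key
            del self.buckets[self.min_freq][evicted]
            del self.values[evicted]
            del self.freq[evicted]
        self.values[key] = value
        self.freq[key] = 1
        self.buckets[1][key] = None
        self.min_freq = 1
```

Holding two keys with use counts 2 and 1, a `MiniLFU(2)` evicts the count-1 key first, matching the policy the linked-list version implements.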
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Illustrate how to add two integers without using arithmetic operators
Author: suraj Kumar
Time Complexity: 1
https://en.wikipedia.org/wiki/Bitwise_operation
"""
def add(first: int, second: int) -> int:
"""
Implementation of addition of integer
Examples:
>>> add(3, 5)
8
>>> add(13, 5)
18
>>> add(-7, 2)
-5
>>> add(0, -7)
-7
>>> add(-321, 0)
-321
"""
while second != 0:
c = first & second
first ^= second
second = c << 1
return first
if __name__ == "__main__":
import doctest
doctest.testmod()
first = int(input("Enter the first number: ").strip())
second = int(input("Enter the second number: ").strip())
print(f"{add(first, second) = }")
| """
Illustrate how to add two integers without using arithmetic operators
Author: suraj Kumar
Time Complexity: 1
https://en.wikipedia.org/wiki/Bitwise_operation
"""
def add(first: int, second: int) -> int:
"""
Implementation of addition of integer
Examples:
>>> add(3, 5)
8
>>> add(13, 5)
18
>>> add(-7, 2)
-5
>>> add(0, -7)
-7
>>> add(-321, 0)
-321
"""
while second != 0:
c = first & second
first ^= second
second = c << 1
return first
if __name__ == "__main__":
import doctest
doctest.testmod()
first = int(input("Enter the first number: ").strip())
second = int(input("Enter the second number: ").strip())
print(f"{add(first, second) = }")
| -1 |
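The while-loop in `add` converges because each pass moves the carry one bit to the left. A small tracing variant (a hypothetical helper, not part of the file above) makes the `(partial_sum, carry)` progression visible:

```python
def add_with_trace(first: int, second: int) -> tuple[int, list[tuple[int, int]]]:
    """Same XOR/AND-shift addition as above, recording each iteration."""
    trace = []
    while second != 0:
        carry = first & second  # bit positions that overflow
        first ^= second         # bitwise sum ignoring carries
        second = carry << 1     # carry moves one column to the left
        trace.append((first, second))
    return first, trace


# add_with_trace(3, 5) walks through (6, 2), (4, 4), (0, 8), (8, 0) -> 8
```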
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from PIL import Image
def change_brightness(img: Image, level: float) -> Image:
"""
Change the brightness of a PIL Image to a given level.
"""
def brightness(c: int) -> float:
"""
Fundamental Transformation/Operation that'll be performed on
every pixel value.
"""
return 128 + level + (c - 128)
if not -255.0 <= level <= 255.0:
raise ValueError("level must be between -255.0 (black) and 255.0 (white)")
return img.point(brightness)
if __name__ == "__main__":
# Load image
with Image.open("image_data/lena.jpg") as img:
# Change brightness to 100
bright_img = change_brightness(img, 100)
bright_img.save("image_data/lena_brightness.png", format="png")
| from PIL import Image
def change_brightness(img: Image, level: float) -> Image:
"""
Change the brightness of a PIL Image to a given level.
"""
def brightness(c: int) -> float:
"""
Fundamental Transformation/Operation that'll be performed on
every pixel value.
"""
return 128 + level + (c - 128)
if not -255.0 <= level <= 255.0:
raise ValueError("level must be between -255.0 (black) and 255.0 (white)")
return img.point(brightness)
if __name__ == "__main__":
# Load image
with Image.open("image_data/lena.jpg") as img:
# Change brightness to 100
bright_img = change_brightness(img, 100)
bright_img.save("image_data/lena_brightness.png", format="png")
| -1 |
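A side note on the transform above: `128 + level + (c - 128)` simplifies to `c + level`, so the function is a uniform shift of every channel. Pillow stores 8-bit channels, so out-of-range results get clipped on write (an assumption about `Image.point` worth verifying). A PIL-free sketch of the same shift with explicit clamping:

```python
def brightness(c: int, level: float) -> float:
    # The transform used above; algebraically it reduces to c + level.
    return 128 + level + (c - 128)


def apply_brightness(pixels: list, level: float) -> list:
    """Apply the shift to a flat list of 8-bit channel values,
    clamping explicitly to the 0..255 range."""
    if not -255.0 <= level <= 255.0:
        raise ValueError("level must be between -255.0 (black) and 255.0 (white)")
    return [max(0, min(255, round(brightness(c, level)))) for c in pixels]
```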
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #
| #
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Numerical integration or quadrature for a smooth function f with known values at x_i
This method is the classical approach of summing 'Equally Spaced Abscissas'
method 2:
"Simpson Rule"
"""
def method_2(boundary, steps):
# "Simpson Rule"
# int(f) = (h/3) * (f(x_0) + 4*f(x_1) + 2*f(x_2) + ... + f(x_n))
h = (boundary[1] - boundary[0]) / steps
a = boundary[0]
b = boundary[1]
x_i = make_points(a, b, h)
y = 0.0
y += (h / 3.0) * f(a)
cnt = 2
for i in x_i:
y += (h / 3) * (4 - 2 * (cnt % 2)) * f(i)
cnt += 1
y += (h / 3.0) * f(b)
return y
def make_points(a, b, h):
x = a + h
while x < (b - h):
yield x
x = x + h
def f(x): # enter your function here
y = (x - 0) * (x - 0)
return y
def main():
a = 0.0 # Lower bound of integration
b = 1.0 # Upper bound of integration
steps = 10.0 # define number of steps or resolution
boundary = [a, b] # define boundary of integration
y = method_2(boundary, steps)
print(f"y = {y}")
if __name__ == "__main__":
main()
| """
Numerical integration or quadrature for a smooth function f with known values at x_i
This method is the classical approach of summing 'Equally Spaced Abscissas'
method 2:
"Simpson Rule"
"""
def method_2(boundary, steps):
# "Simpson Rule"
# int(f) = (h/3) * (f(x_0) + 4*f(x_1) + 2*f(x_2) + ... + f(x_n))
h = (boundary[1] - boundary[0]) / steps
a = boundary[0]
b = boundary[1]
x_i = make_points(a, b, h)
y = 0.0
y += (h / 3.0) * f(a)
cnt = 2
for i in x_i:
y += (h / 3) * (4 - 2 * (cnt % 2)) * f(i)
cnt += 1
y += (h / 3.0) * f(b)
return y
def make_points(a, b, h):
x = a + h
while x < (b - h):
yield x
x = x + h
def f(x): # enter your function here
y = (x - 0) * (x - 0)
return y
def main():
a = 0.0 # Lower bound of integration
b = 1.0 # Upper bound of integration
steps = 10.0 # define number of steps or resolution
boundary = [a, b] # define boundary of integration
y = method_2(boundary, steps)
print(f"y = {y}")
if __name__ == "__main__":
main()
| -1 |
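For comparison with `method_2` above (whose point generator stops at `b - h` and, under floating-point error, appears to skip the last interior point), here is an independent composite-Simpson sketch using the standard 1, 4, 2, ..., 4, 1 weights:

```python
def simpson(f, a: float, b: float, n: int) -> float:
    """Composite Simpson's rule over [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3


# For f(x) = x**2 on [0, 1] the exact integral is 1/3; Simpson's rule is
# exact for polynomials up to degree 3, so only rounding error remains.
```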
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
import re
def natural_sort(input_list: list[str]) -> list[str]:
"""
Sort the given list of strings in the way that humans expect.
The normal Python sort algorithm sorts lexicographically,
so you might not get the results that you expect...
>>> example1 = ['2 ft 7 in', '1 ft 5 in', '10 ft 2 in', '2 ft 11 in', '7 ft 6 in']
>>> sorted(example1)
['1 ft 5 in', '10 ft 2 in', '2 ft 11 in', '2 ft 7 in', '7 ft 6 in']
>>> # The natural sort algorithm sorts based on meaning, not on code points.
>>> natural_sort(example1)
['1 ft 5 in', '2 ft 7 in', '2 ft 11 in', '7 ft 6 in', '10 ft 2 in']
>>> example2 = ['Elm11', 'Elm12', 'Elm2', 'elm0', 'elm1', 'elm10', 'elm13', 'elm9']
>>> sorted(example2)
['Elm11', 'Elm12', 'Elm2', 'elm0', 'elm1', 'elm10', 'elm13', 'elm9']
>>> natural_sort(example2)
['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13']
"""
def alphanum_key(key):
return [int(s) if s.isdigit() else s.lower() for s in re.split("([0-9]+)", key)]
return sorted(input_list, key=alphanum_key)
if __name__ == "__main__":
import doctest
doctest.testmod()
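The `alphanum_key` helper above is the heart of the algorithm: it splits a string into alternating non-digit and digit runs so the numeric runs compare as integers. A minimal standalone sketch (re-implemented here for illustration, mirroring the function above):

```python
import re


def alphanum_key(key: str) -> list:
    # re.split with a capturing group keeps the digit runs in the result,
    # e.g. "elm10" -> ['elm', '10', ''] -> ['elm', 10, '']
    return [int(s) if s.isdigit() else s.lower() for s in re.split("([0-9]+)", key)]


print(sorted(["elm9", "elm10", "Elm2"], key=alphanum_key))
```

Because each key mixes lowercase strings and ints at the same positions, "Elm2" sorts before "elm9" and "elm9" before "elm10", which plain lexicographic sorting gets wrong.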
| from __future__ import annotations
import re
def natural_sort(input_list: list[str]) -> list[str]:
"""
Sort the given list of strings in the way that humans expect.
The normal Python sort algorithm sorts lexicographically,
so you might not get the results that you expect...
>>> example1 = ['2 ft 7 in', '1 ft 5 in', '10 ft 2 in', '2 ft 11 in', '7 ft 6 in']
>>> sorted(example1)
['1 ft 5 in', '10 ft 2 in', '2 ft 11 in', '2 ft 7 in', '7 ft 6 in']
>>> # The natural sort algorithm sorts based on meaning, not on code points.
>>> natural_sort(example1)
['1 ft 5 in', '2 ft 7 in', '2 ft 11 in', '7 ft 6 in', '10 ft 2 in']
>>> example2 = ['Elm11', 'Elm12', 'Elm2', 'elm0', 'elm1', 'elm10', 'elm13', 'elm9']
>>> sorted(example2)
['Elm11', 'Elm12', 'Elm2', 'elm0', 'elm1', 'elm10', 'elm13', 'elm9']
>>> natural_sort(example2)
['elm0', 'elm1', 'Elm2', 'elm9', 'elm10', 'Elm11', 'Elm12', 'elm13']
"""
def alphanum_key(key):
return [int(s) if s.isdigit() else s.lower() for s in re.split("([0-9]+)", key)]
return sorted(input_list, key=alphanum_key)
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| #!/usr/bin/env python3
"""
Davis-Putnam-Logemann-Loveland (DPLL) algorithm is a complete, backtracking-based
search algorithm for deciding the satisfiability of propositional logic formulae in
conjunctive normal form, i.e., for solving the Conjunctive Normal Form SATisfiability
(CNF-SAT) problem.
For more information about the algorithm: https://en.wikipedia.org/wiki/DPLL_algorithm
"""
from __future__ import annotations
import random
from collections.abc import Iterable
class Clause:
"""
A clause represented in Conjunctive Normal Form.
A clause is a set of literals, either complemented or otherwise.
For example:
{A1, A2, A3'} is the clause (A1 v A2 v A3')
{A5', A2', A1} is the clause (A5' v A2' v A1)
Create model
>>> clause = Clause(["A1", "A2'", "A3"])
>>> clause.evaluate({"A1": True})
True
"""
def __init__(self, literals: list[str]) -> None:
"""
Represent the literals and an assignment in a clause.
"""
# Assign all literals to None initially
self.literals: dict[str, bool | None] = {literal: None for literal in literals}
def __str__(self) -> str:
"""
To print a clause as in Conjunctive Normal Form.
>>> str(Clause(["A1", "A2'", "A3"]))
"{A1 , A2' , A3}"
"""
return "{" + " , ".join(self.literals) + "}"
def __len__(self) -> int:
"""
To print a clause as in Conjunctive Normal Form.
>>> len(Clause([]))
0
>>> len(Clause(["A1", "A2'", "A3"]))
3
"""
return len(self.literals)
def assign(self, model: dict[str, bool | None]) -> None:
"""
Assign values to literals of the clause as given by model.
"""
for literal in self.literals:
symbol = literal[:2]
if symbol in model:
value = model[symbol]
else:
continue
if value is not None:
# Complement assignment if literal is in complemented form
if literal.endswith("'"):
value = not value
self.literals[literal] = value
def evaluate(self, model: dict[str, bool | None]) -> bool | None:
"""
Evaluates the clause with the assignments in model.
This has the following steps:
1. Return True if both a literal and its complement exist in the clause.
2. Return True if a single literal has the assignment True.
3. Return None (unable to complete evaluation) if a literal has no assignment.
4. Compute disjunction of all values assigned in clause.
"""
for literal in self.literals:
symbol = literal.rstrip("'") if literal.endswith("'") else literal + "'"
if symbol in self.literals:
return True
self.assign(model)
for value in self.literals.values():
if value in (True, None):
return value
return any(self.literals.values())
class Formula:
"""
A formula represented in Conjunctive Normal Form.
A formula is a set of clauses.
For example,
{{A1, A2, A3'}, {A5', A2', A1}} is ((A1 v A2 v A3') and (A5' v A2' v A1))
"""
def __init__(self, clauses: Iterable[Clause]) -> None:
"""
Represent the number of clauses and the clauses themselves.
"""
self.clauses = list(clauses)
def __str__(self) -> str:
"""
To print a formula as in Conjunctive Normal Form.
>>> str(Formula([Clause(["A1", "A2'", "A3"]), Clause(["A5'", "A2'", "A1"])]))
"{{A1 , A2' , A3} , {A5' , A2' , A1}}"
"""
return "{" + " , ".join(str(clause) for clause in self.clauses) + "}"
def generate_clause() -> Clause:
"""
Randomly generate a clause.
All literals have the name Ax, where x is an integer from 1 to 5.
"""
literals = []
no_of_literals = random.randint(1, 5)
base_var = "A"
i = 0
while i < no_of_literals:
var_no = random.randint(1, 5)
var_name = base_var + str(var_no)
var_complement = random.randint(0, 1)
if var_complement == 1:
var_name += "'"
if var_name in literals:
i -= 1
else:
literals.append(var_name)
i += 1
return Clause(literals)
def generate_formula() -> Formula:
"""
Randomly generate a formula.
"""
clauses: set[Clause] = set()
no_of_clauses = random.randint(1, 10)
while len(clauses) < no_of_clauses:
clauses.add(generate_clause())
return Formula(clauses)
def generate_parameters(formula: Formula) -> tuple[list[Clause], list[str]]:
"""
Return the clauses and symbols from a formula.
A symbol is the uncomplemented form of a literal.
For example,
Symbol of A3 is A3.
Symbol of A5' is A5.
>>> formula = Formula([Clause(["A1", "A2'", "A3"]), Clause(["A5'", "A2'", "A1"])])
>>> clauses, symbols = generate_parameters(formula)
>>> clauses_list = [str(i) for i in clauses]
>>> clauses_list
["{A1 , A2' , A3}", "{A5' , A2' , A1}"]
>>> symbols
['A1', 'A2', 'A3', 'A5']
"""
clauses = formula.clauses
symbols_set = []
for clause in formula.clauses:
for literal in clause.literals:
symbol = literal[:2]
if symbol not in symbols_set:
symbols_set.append(symbol)
return clauses, symbols_set
def find_pure_symbols(
clauses: list[Clause], symbols: list[str], model: dict[str, bool | None]
) -> tuple[list[str], dict[str, bool | None]]:
"""
Return pure symbols and their values needed to satisfy the clauses.
Pure symbols are symbols in a formula that exist only
in one form, either complemented or otherwise.
For example,
{ { A4 , A3 , A5' , A1 , A3' } , { A4 } , { A3 } } has
pure symbols A4, A5' and A1.
This has the following steps:
1. Ignore clauses that have already evaluated to be True.
2. Find symbols that occur only in one form in the rest of the clauses.
3. Assign value True or False depending on whether the symbol occurs
in normal or complemented form, respectively.
>>> formula = Formula([Clause(["A1", "A2'", "A3"]), Clause(["A5'", "A2'", "A1"])])
>>> clauses, symbols = generate_parameters(formula)
>>> pure_symbols, values = find_pure_symbols(clauses, symbols, {})
>>> pure_symbols
['A1', 'A2', 'A3', 'A5']
>>> values
{'A1': True, 'A2': False, 'A3': True, 'A5': False}
"""
pure_symbols = []
assignment: dict[str, bool | None] = {}
literals = []
for clause in clauses:
if clause.evaluate(model):
continue
for literal in clause.literals:
literals.append(literal)
for s in symbols:
sym = s + "'"
if (s in literals and sym not in literals) or (
s not in literals and sym in literals
):
pure_symbols.append(s)
for p in pure_symbols:
assignment[p] = None
for s in pure_symbols:
sym = s + "'"
if s in literals:
assignment[s] = True
elif sym in literals:
assignment[s] = False
return pure_symbols, assignment
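The pure-symbol rule can also be sketched independently of the Clause class, scanning plain lists of literal strings; the function name and representation here are illustrative, not part of the module above:

```python
def pure_symbols(clauses: list[list[str]]) -> dict[str, bool]:
    # A symbol is pure if it appears in only one polarity across all clauses;
    # a pure symbol can be safely assigned to satisfy every clause containing it.
    literals = {lit for clause in clauses for lit in clause}
    assignment: dict[str, bool] = {}
    for lit in literals:
        base = lit.rstrip("'")
        pos, neg = base in literals, base + "'" in literals
        if pos != neg:  # exactly one polarity present
            assignment[base] = pos
    return assignment
```

For `[["A4", "A3", "A5'", "A1", "A3'"], ["A4"], ["A3"]]` this flags A4, A1 (assign True) and A5 (assign False), while A3 is impure because both A3 and A3' occur.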
def find_unit_clauses(
clauses: list[Clause], model: dict[str, bool | None]
) -> tuple[list[str], dict[str, bool | None]]:
"""
Returns the unit symbols and their values needed to satisfy the clause.
Unit symbols are symbols in a formula that are:
- The only symbol in a clause, or
- The only unassigned literal in a clause whose other literals are all False
This has the following steps:
1. Find symbols that are the only occurrences in a clause.
2. Find symbols in a clause where all other literals are assigned False.
3. Assign True or False depending on whether the symbol occurs in
normal or complemented form, respectively.
>>> clause1 = Clause(["A4", "A3", "A5'", "A1", "A3'"])
>>> clause2 = Clause(["A4"])
>>> clause3 = Clause(["A3"])
>>> clauses, symbols = generate_parameters(Formula([clause1, clause2, clause3]))
>>> unit_clauses, values = find_unit_clauses(clauses, {})
>>> unit_clauses
['A4', 'A3']
>>> values
{'A4': True, 'A3': True}
"""
unit_symbols = []
for clause in clauses:
if len(clause) == 1:
unit_symbols.append(next(iter(clause.literals.keys())))
else:
f_count, n_count = 0, 0
for literal, value in clause.literals.items():
if value is False:
f_count += 1
elif value is None:
sym = literal
n_count += 1
if f_count == len(clause) - 1 and n_count == 1:
unit_symbols.append(sym)
assignment: dict[str, bool | None] = {}
for i in unit_symbols:
symbol = i[:2]
assignment[symbol] = len(i) == 2
unit_symbols = [i[:2] for i in unit_symbols]
return unit_symbols, assignment
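With an empty model, the unit-clause rule reduces to picking out single-literal clauses. A standalone sketch under that simplifying assumption (names here are illustrative):

```python
def unit_assignments(clauses: list[list[str]]) -> dict[str, bool]:
    # Assuming no prior assignments, a unit clause is one with a single literal;
    # that literal must be made true, so complemented literals get False.
    assignment: dict[str, bool] = {}
    for clause in clauses:
        if len(clause) == 1:
            lit = clause[0]
            assignment[lit.rstrip("'")] = not lit.endswith("'")
    return assignment
```

The full `find_unit_clauses` above additionally handles the partially assigned case, where all but one literal of a longer clause are already False.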
def dpll_algorithm(
clauses: list[Clause], symbols: list[str], model: dict[str, bool | None]
) -> tuple[bool | None, dict[str, bool | None] | None]:
"""
Return whether the formula is satisfiable, together with a satisfying model (None if unsatisfiable).
This has the following steps:
1. If every clause in clauses is True, return True.
2. If some clause in clauses is False, return False.
3. Find pure symbols.
4. Find unit symbols.
>>> formula = Formula([Clause(["A4", "A3", "A5'", "A1", "A3'"]), Clause(["A4"])])
>>> clauses, symbols = generate_parameters(formula)
>>> soln, model = dpll_algorithm(clauses, symbols, {})
>>> soln
True
>>> model
{'A4': True}
"""
check_clause_all_true = True
for clause in clauses:
clause_check = clause.evaluate(model)
if clause_check is False:
return False, None
elif clause_check is None:
check_clause_all_true = False
continue
if check_clause_all_true:
return True, model
try:
pure_symbols, assignment = find_pure_symbols(clauses, symbols, model)
except RecursionError:
print("raises a RecursionError and is")
return None, {}
p = None
if len(pure_symbols) > 0:
p, value = pure_symbols[0], assignment[pure_symbols[0]]
if p:
tmp_model = model
tmp_model[p] = value
tmp_symbols = list(symbols)
if p in tmp_symbols:
tmp_symbols.remove(p)
return dpll_algorithm(clauses, tmp_symbols, tmp_model)
unit_symbols, assignment = find_unit_clauses(clauses, model)
p = None
if len(unit_symbols) > 0:
p, value = unit_symbols[0], assignment[unit_symbols[0]]
if p:
tmp_model = model
tmp_model[p] = value
tmp_symbols = list(symbols)
if p in tmp_symbols:
tmp_symbols.remove(p)
return dpll_algorithm(clauses, tmp_symbols, tmp_model)
p = symbols[0]
rest = symbols[1:]
tmp1, tmp2 = model, model
tmp1[p], tmp2[p] = True, False
return dpll_algorithm(clauses, rest, tmp1) or dpll_algorithm(clauses, rest, tmp2)
if __name__ == "__main__":
import doctest
doctest.testmod()
formula = generate_formula()
print(f"The formula {formula} is", end=" ")
clauses, symbols = generate_parameters(formula)
solution, model = dpll_algorithm(clauses, symbols, {})
if solution:
print(f"satisfiable with the assignment {model}.")
else:
print("not satisfiable.")
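The backtracking core of DPLL can be condensed into a compact recursion over clauses represented as plain literal lists. This is an independent sketch (without the pure-symbol and unit-clause heuristics of the module above), offered only to show the simplify-then-branch shape:

```python
def dpll(clauses: list[list[str]], model: dict[str, bool]) -> bool:
    # Simplify: drop clauses satisfied by the model, strip falsified literals.
    simplified = []
    for clause in clauses:
        kept = []
        satisfied = False
        for lit in clause:
            sym, val = lit.rstrip("'"), not lit.endswith("'")
            if sym in model:
                if model[sym] == val:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:
            return False  # empty clause: conflict under this model
        simplified.append(kept)
    if not simplified:
        return True  # every clause satisfied
    sym = simplified[0][0].rstrip("'")  # branch on the first unassigned symbol
    return dpll(simplified, {**model, sym: True}) or dpll(simplified, {**model, sym: False})
```

For example, `(A1 v A2') ^ A1'` is satisfiable (A1=False, A2=False), while `A1 ^ A1'` is not.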
| #!/usr/bin/env python3
"""
Davis-Putnam-Logemann-Loveland (DPLL) algorithm is a complete, backtracking-based
search algorithm for deciding the satisfiability of propositional logic formulae in
conjunctive normal form, i.e., for solving the Conjunctive Normal Form SATisfiability
(CNF-SAT) problem.
For more information about the algorithm: https://en.wikipedia.org/wiki/DPLL_algorithm
"""
from __future__ import annotations
import random
from collections.abc import Iterable
class Clause:
"""
A clause represented in Conjunctive Normal Form.
A clause is a set of literals, either complemented or otherwise.
For example:
{A1, A2, A3'} is the clause (A1 v A2 v A3')
{A5', A2', A1} is the clause (A5' v A2' v A1)
Create model
>>> clause = Clause(["A1", "A2'", "A3"])
>>> clause.evaluate({"A1": True})
True
"""
def __init__(self, literals: list[str]) -> None:
"""
Represent the literals and an assignment in a clause.
"""
# Assign all literals to None initially
self.literals: dict[str, bool | None] = {literal: None for literal in literals}
def __str__(self) -> str:
"""
To print a clause as in Conjunctive Normal Form.
>>> str(Clause(["A1", "A2'", "A3"]))
"{A1 , A2' , A3}"
"""
return "{" + " , ".join(self.literals) + "}"
def __len__(self) -> int:
"""
To print a clause as in Conjunctive Normal Form.
>>> len(Clause([]))
0
>>> len(Clause(["A1", "A2'", "A3"]))
3
"""
return len(self.literals)
def assign(self, model: dict[str, bool | None]) -> None:
"""
Assign values to literals of the clause as given by model.
"""
for literal in self.literals:
symbol = literal[:2]
if symbol in model:
value = model[symbol]
else:
continue
if value is not None:
# Complement assignment if literal is in complemented form
if literal.endswith("'"):
value = not value
self.literals[literal] = value
def evaluate(self, model: dict[str, bool | None]) -> bool | None:
"""
Evaluates the clause with the assignments in model.
This has the following steps:
1. Return True if both a literal and its complement exist in the clause.
2. Return True if a single literal has the assignment True.
3. Return None (unable to complete evaluation) if a literal has no assignment.
4. Compute disjunction of all values assigned in clause.
"""
for literal in self.literals:
symbol = literal.rstrip("'") if literal.endswith("'") else literal + "'"
if symbol in self.literals:
return True
self.assign(model)
for value in self.literals.values():
if value in (True, None):
return value
return any(self.literals.values())
class Formula:
"""
A formula represented in Conjunctive Normal Form.
A formula is a set of clauses.
For example,
{{A1, A2, A3'}, {A5', A2', A1}} is ((A1 v A2 v A3') and (A5' v A2' v A1))
"""
def __init__(self, clauses: Iterable[Clause]) -> None:
"""
Represent the number of clauses and the clauses themselves.
"""
self.clauses = list(clauses)
def __str__(self) -> str:
"""
To print a formula as in Conjunctive Normal Form.
>>> str(Formula([Clause(["A1", "A2'", "A3"]), Clause(["A5'", "A2'", "A1"])]))
"{{A1 , A2' , A3} , {A5' , A2' , A1}}"
"""
return "{" + " , ".join(str(clause) for clause in self.clauses) + "}"
def generate_clause() -> Clause:
"""
Randomly generate a clause.
All literals have the name Ax, where x is an integer from 1 to 5.
"""
literals = []
no_of_literals = random.randint(1, 5)
base_var = "A"
i = 0
while i < no_of_literals:
var_no = random.randint(1, 5)
var_name = base_var + str(var_no)
var_complement = random.randint(0, 1)
if var_complement == 1:
var_name += "'"
if var_name in literals:
i -= 1
else:
literals.append(var_name)
i += 1
return Clause(literals)
def generate_formula() -> Formula:
"""
Randomly generate a formula.
"""
clauses: set[Clause] = set()
no_of_clauses = random.randint(1, 10)
while len(clauses) < no_of_clauses:
clauses.add(generate_clause())
return Formula(clauses)
def generate_parameters(formula: Formula) -> tuple[list[Clause], list[str]]:
"""
Return the clauses and symbols from a formula.
A symbol is the uncomplemented form of a literal.
For example,
Symbol of A3 is A3.
Symbol of A5' is A5.
>>> formula = Formula([Clause(["A1", "A2'", "A3"]), Clause(["A5'", "A2'", "A1"])])
>>> clauses, symbols = generate_parameters(formula)
>>> clauses_list = [str(i) for i in clauses]
>>> clauses_list
["{A1 , A2' , A3}", "{A5' , A2' , A1}"]
>>> symbols
['A1', 'A2', 'A3', 'A5']
"""
clauses = formula.clauses
symbols_set = []
for clause in formula.clauses:
for literal in clause.literals:
symbol = literal[:2]
if symbol not in symbols_set:
symbols_set.append(symbol)
return clauses, symbols_set
def find_pure_symbols(
clauses: list[Clause], symbols: list[str], model: dict[str, bool | None]
) -> tuple[list[str], dict[str, bool | None]]:
"""
Return pure symbols and their values needed to satisfy the clauses.
Pure symbols are symbols in a formula that exist only
in one form, either complemented or otherwise.
For example,
{ { A4 , A3 , A5' , A1 , A3' } , { A4 } , { A3 } } has
pure symbols A4, A5' and A1.
This has the following steps:
1. Ignore clauses that have already evaluated to be True.
2. Find symbols that occur only in one form in the rest of the clauses.
3. Assign value True or False depending on whether the symbol occurs
in normal or complemented form, respectively.
>>> formula = Formula([Clause(["A1", "A2'", "A3"]), Clause(["A5'", "A2'", "A1"])])
>>> clauses, symbols = generate_parameters(formula)
>>> pure_symbols, values = find_pure_symbols(clauses, symbols, {})
>>> pure_symbols
['A1', 'A2', 'A3', 'A5']
>>> values
{'A1': True, 'A2': False, 'A3': True, 'A5': False}
"""
pure_symbols = []
assignment: dict[str, bool | None] = {}
literals = []
for clause in clauses:
if clause.evaluate(model):
continue
for literal in clause.literals:
literals.append(literal)
for s in symbols:
sym = s + "'"
if (s in literals and sym not in literals) or (
s not in literals and sym in literals
):
pure_symbols.append(s)
for p in pure_symbols:
assignment[p] = None
for s in pure_symbols:
sym = s + "'"
if s in literals:
assignment[s] = True
elif sym in literals:
assignment[s] = False
return pure_symbols, assignment
def find_unit_clauses(
clauses: list[Clause], model: dict[str, bool | None]
) -> tuple[list[str], dict[str, bool | None]]:
"""
Returns the unit symbols and their values needed to satisfy the clause.
Unit symbols are symbols in a formula that are:
- The only symbol in a clause, or
- The only unassigned literal in a clause whose other literals are all False
This has the following steps:
1. Find symbols that are the only occurrences in a clause.
2. Find symbols in a clause where all other literals are assigned False.
3. Assign True or False depending on whether the symbol occurs in
normal or complemented form, respectively.
>>> clause1 = Clause(["A4", "A3", "A5'", "A1", "A3'"])
>>> clause2 = Clause(["A4"])
>>> clause3 = Clause(["A3"])
>>> clauses, symbols = generate_parameters(Formula([clause1, clause2, clause3]))
>>> unit_clauses, values = find_unit_clauses(clauses, {})
>>> unit_clauses
['A4', 'A3']
>>> values
{'A4': True, 'A3': True}
"""
unit_symbols = []
for clause in clauses:
if len(clause) == 1:
unit_symbols.append(next(iter(clause.literals.keys())))
else:
f_count, n_count = 0, 0
for literal, value in clause.literals.items():
if value is False:
f_count += 1
elif value is None:
sym = literal
n_count += 1
if f_count == len(clause) - 1 and n_count == 1:
unit_symbols.append(sym)
assignment: dict[str, bool | None] = {}
for i in unit_symbols:
symbol = i[:2]
assignment[symbol] = len(i) == 2
unit_symbols = [i[:2] for i in unit_symbols]
return unit_symbols, assignment
def dpll_algorithm(
clauses: list[Clause], symbols: list[str], model: dict[str, bool | None]
) -> tuple[bool | None, dict[str, bool | None] | None]:
"""
Return whether the formula is satisfiable, together with a satisfying model (None if unsatisfiable).
This has the following steps:
1. If every clause in clauses is True, return True.
2. If some clause in clauses is False, return False.
3. Find pure symbols.
4. Find unit symbols.
>>> formula = Formula([Clause(["A4", "A3", "A5'", "A1", "A3'"]), Clause(["A4"])])
>>> clauses, symbols = generate_parameters(formula)
>>> soln, model = dpll_algorithm(clauses, symbols, {})
>>> soln
True
>>> model
{'A4': True}
"""
check_clause_all_true = True
for clause in clauses:
clause_check = clause.evaluate(model)
if clause_check is False:
return False, None
elif clause_check is None:
check_clause_all_true = False
continue
if check_clause_all_true:
return True, model
try:
pure_symbols, assignment = find_pure_symbols(clauses, symbols, model)
except RecursionError:
print("raises a RecursionError and is")
return None, {}
p = None
if len(pure_symbols) > 0:
p, value = pure_symbols[0], assignment[pure_symbols[0]]
if p:
tmp_model = model
tmp_model[p] = value
tmp_symbols = list(symbols)
if p in tmp_symbols:
tmp_symbols.remove(p)
return dpll_algorithm(clauses, tmp_symbols, tmp_model)
unit_symbols, assignment = find_unit_clauses(clauses, model)
p = None
if len(unit_symbols) > 0:
p, value = unit_symbols[0], assignment[unit_symbols[0]]
if p:
tmp_model = model
tmp_model[p] = value
tmp_symbols = list(symbols)
if p in tmp_symbols:
tmp_symbols.remove(p)
return dpll_algorithm(clauses, tmp_symbols, tmp_model)
p = symbols[0]
rest = symbols[1:]
tmp1, tmp2 = model, model
tmp1[p], tmp2[p] = True, False
return dpll_algorithm(clauses, rest, tmp1) or dpll_algorithm(clauses, rest, tmp2)
if __name__ == "__main__":
import doctest
doctest.testmod()
formula = generate_formula()
print(f"The formula {formula} is", end=" ")
clauses, symbols = generate_parameters(formula)
solution, model = dpll_algorithm(clauses, symbols, {})
if solution:
print(f"satisfiable with the assignment {model}.")
else:
print("not satisfiable.")
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
https://en.wikipedia.org/wiki/Component_(graph_theory)
Finding connected components in a graph
"""
test_graph_1 = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1], 4: [5, 6], 5: [4, 6], 6: [4, 5]}
test_graph_2 = {0: [1, 2, 3], 1: [0, 3], 2: [0], 3: [0, 1], 4: [], 5: []}
def dfs(graph: dict, vert: int, visited: list) -> list:
"""
Use depth-first search to find all vertices
in the same component as the initial vertex
>>> dfs(test_graph_1, 0, 5 * [False])
[0, 1, 3, 2]
>>> dfs(test_graph_2, 0, 6 * [False])
[0, 1, 3, 2]
"""
visited[vert] = True
connected_verts = []
for neighbour in graph[vert]:
if not visited[neighbour]:
connected_verts += dfs(graph, neighbour, visited)
return [vert, *connected_verts]
def connected_components(graph: dict) -> list:
"""
This function takes graph as a parameter
and then returns the list of connected components
>>> connected_components(test_graph_1)
[[0, 1, 3, 2], [4, 5, 6]]
>>> connected_components(test_graph_2)
[[0, 1, 3, 2], [4], [5]]
"""
graph_size = len(graph)
visited = graph_size * [False]
components_list = []
for i in range(graph_size):
if not visited[i]:
i_connected = dfs(graph, i, visited)
components_list.append(i_connected)
return components_list
if __name__ == "__main__":
import doctest
doctest.testmod()
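The recursive `dfs` above can hit Python's recursion limit on long path-like graphs. An iterative variant with an explicit stack is a common alternative; this sketch returns the same components, though vertex order within each component differs from the recursive version:

```python
def connected_components_iterative(graph: dict) -> list:
    # Explicit-stack DFS: no recursion, so deep graphs are safe.
    visited: set = set()
    components = []
    for start in graph:
        if start in visited:
            continue
        stack, component = [start], []
        visited.add(start)
        while stack:
            vert = stack.pop()
            component.append(vert)
            for neighbour in graph[vert]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    stack.append(neighbour)
        components.append(component)
    return components
```

On `test_graph_2` this yields the same three components `{0, 1, 2, 3}`, `{4}`, and `{5}` as the recursive `connected_components`.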
| """
https://en.wikipedia.org/wiki/Component_(graph_theory)
Finding connected components in a graph
"""
test_graph_1 = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1], 4: [5, 6], 5: [4, 6], 6: [4, 5]}
test_graph_2 = {0: [1, 2, 3], 1: [0, 3], 2: [0], 3: [0, 1], 4: [], 5: []}
def dfs(graph: dict, vert: int, visited: list) -> list:
"""
Use depth-first search to find all vertices
in the same component as the initial vertex
>>> dfs(test_graph_1, 0, 5 * [False])
[0, 1, 3, 2]
>>> dfs(test_graph_2, 0, 6 * [False])
[0, 1, 3, 2]
"""
visited[vert] = True
connected_verts = []
for neighbour in graph[vert]:
if not visited[neighbour]:
connected_verts += dfs(graph, neighbour, visited)
return [vert, *connected_verts]
def connected_components(graph: dict) -> list:
"""
This function takes graph as a parameter
and then returns the list of connected components
>>> connected_components(test_graph_1)
[[0, 1, 3, 2], [4, 5, 6]]
>>> connected_components(test_graph_2)
[[0, 1, 3, 2], [4], [5]]
"""
graph_size = len(graph)
visited = graph_size * [False]
components_list = []
for i in range(graph_size):
if not visited[i]:
i_connected = dfs(graph, i, visited)
components_list.append(i_connected)
return components_list
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from typing import Any
def mode(input_list: list) -> list[Any]:
"""This function returns the mode(Mode as in the measures of
central tendency) of the input data.
The input list may contain any Datastructure or any Datatype.
>>> mode([2, 3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 2, 2, 2])
[2]
>>> mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 2, 2, 2])
[2]
>>> mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 4, 2, 2, 4, 2])
[2, 4]
>>> mode(["x", "y", "y", "z"])
['y']
>>> mode(["x", "x" , "y", "y", "z"])
['x', 'y']
"""
if not input_list:
return []
result = [input_list.count(value) for value in input_list]
y = max(result) # Gets the maximum count in the input list.
# Gets values of modes
return sorted({input_list[i] for i, value in enumerate(result) if value == y})
if __name__ == "__main__":
import doctest
doctest.testmod()
| from typing import Any
def mode(input_list: list) -> list[Any]:
"""This function returns the mode(Mode as in the measures of
central tendency) of the input data.
The input list may contain any Datastructure or any Datatype.
>>> mode([2, 3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 2, 2, 2])
[2]
>>> mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 2, 2, 2])
[2]
>>> mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 4, 2, 2, 4, 2])
[2, 4]
>>> mode(["x", "y", "y", "z"])
['y']
>>> mode(["x", "x" , "y", "y", "z"])
['x', 'y']
"""
if not input_list:
return []
result = [input_list.count(value) for value in input_list]
y = max(result) # Gets the maximum count in the input list.
# Gets values of modes
return sorted({input_list[i] for i, value in enumerate(result) if value == y})
if __name__ == "__main__":
import doctest
doctest.testmod()
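The `list.count` loop above is O(n**2). As a hedged alternative sketch (the function name is hypothetical), the same multimode result can be computed in O(n) with `collections.Counter`:

```python
from collections import Counter

# O(n) variant of the mode computation above using a single counting pass.
def mode_with_counter(input_list: list) -> list:
    if not input_list:
        return []
    counts = Counter(input_list)
    max_count = max(counts.values())
    return sorted(value for value, count in counts.items() if count == max_count)

print(mode_with_counter([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 4, 2, 2, 4, 2]))  # [2, 4]
```

Python 3.8+ also ships `statistics.multimode`, which returns the modes in first-seen order rather than sorted.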
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Given a grid, where you start from the top left position [0, 0],
you want to find how many paths you can take to get to the bottom right position.
start here -> 0 0 0 0
1 1 0 0
0 0 0 1
0 1 0 0 <- finish here
how many 'distinct' paths can you take to get to the finish?
Using a recursive depth-first search algorithm below, you are able to
find the number of distinct unique paths (count).
'*' will demonstrate a path
In the example above, there are two distinct paths:
1. 2.
* * * 0 * * * *
1 1 * 0 1 1 * *
0 0 * 1 0 0 * 1
0 1 * * 0 1 * *
"""
def depth_first_search(grid: list[list[int]], row: int, col: int, visit: set) -> int:
"""
Recursive Backtracking Depth First Search Algorithm
Starting from top left of a matrix, count the number of
paths that can reach the bottom right of a matrix.
1 represents a block (inaccessible)
0 represents a valid space (accessible)
0 0 0 0
1 1 0 0
0 0 0 1
0 1 0 0
>>> grid = [[0, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0]]
>>> depth_first_search(grid, 0, 0, set())
2
0 0 0 0 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
>>> grid = [[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]]
>>> depth_first_search(grid, 0, 0, set())
2
"""
row_length, col_length = len(grid), len(grid[0])
if (
min(row, col) < 0
or row == row_length
or col == col_length
or (row, col) in visit
or grid[row][col] == 1
):
return 0
if row == row_length - 1 and col == col_length - 1:
return 1
visit.add((row, col))
count = 0
count += depth_first_search(grid, row + 1, col, visit)
count += depth_first_search(grid, row - 1, col, visit)
count += depth_first_search(grid, row, col + 1, visit)
count += depth_first_search(grid, row, col - 1, visit)
visit.remove((row, col))
return count
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Given a grid, where you start from the top left position [0, 0],
you want to find how many paths you can take to get to the bottom right position.
start here -> 0 0 0 0
1 1 0 0
0 0 0 1
0 1 0 0 <- finish here
how many 'distinct' paths can you take to get to the finish?
Using a recursive depth-first search algorithm below, you are able to
find the number of distinct unique paths (count).
'*' will demonstrate a path
In the example above, there are two distinct paths:
1. 2.
* * * 0 * * * *
1 1 * 0 1 1 * *
0 0 * 1 0 0 * 1
0 1 * * 0 1 * *
"""
def depth_first_search(grid: list[list[int]], row: int, col: int, visit: set) -> int:
"""
Recursive Backtracking Depth First Search Algorithm
Starting from top left of a matrix, count the number of
paths that can reach the bottom right of a matrix.
1 represents a block (inaccessible)
0 represents a valid space (accessible)
0 0 0 0
1 1 0 0
0 0 0 1
0 1 0 0
>>> grid = [[0, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0]]
>>> depth_first_search(grid, 0, 0, set())
2
0 0 0 0 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
>>> grid = [[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]]
>>> depth_first_search(grid, 0, 0, set())
2
"""
row_length, col_length = len(grid), len(grid[0])
if (
min(row, col) < 0
or row == row_length
or col == col_length
or (row, col) in visit
or grid[row][col] == 1
):
return 0
if row == row_length - 1 and col == col_length - 1:
return 1
visit.add((row, col))
count = 0
count += depth_first_search(grid, row + 1, col, visit)
count += depth_first_search(grid, row - 1, col, visit)
count += depth_first_search(grid, row, col + 1, visit)
count += depth_first_search(grid, row, col - 1, visit)
visit.remove((row, col))
return count
if __name__ == "__main__":
import doctest
doctest.testmod()
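Backtracking is needed above because paths may move in all four directions. As a hedged companion sketch (function name hypothetical), if movement is restricted to right/down only there are no cycles, and the count reduces to simple dynamic programming:

```python
# DP count of monotone (right/down only) paths through the same kind of
# 0/1 grid; illustrative companion to the backtracking search above.
def count_monotone_paths(grid: list[list[int]]) -> int:
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    dp[0][0] = 0 if grid[0][0] == 1 else 1
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                dp[r][c] = 0
                continue
            if r > 0:
                dp[r][c] += dp[r - 1][c]
            if c > 0:
                dp[r][c] += dp[r][c - 1]
    return dp[rows - 1][cols - 1]

grid = [[0, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0]]
print(count_monotone_paths(grid))  # 1
```

For this grid the monotone count is 1, while the unrestricted backtracking count is 2: the second distinct path requires a leftward move, which the DP deliberately excludes.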
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| import requests
from bs4 import BeautifulSoup, NavigableString, Tag
from fake_useragent import UserAgent
BASE_URL = "https://ww1.gogoanime2.org"
def search_scraper(anime_name: str) -> list:
"""[summary]
Take a URL and
return a list of anime after scraping the site.
>>> type(search_scraper("demon_slayer"))
<class 'list'>
Args:
anime_name (str): [Name of anime]
Raises:
e: [Raises exception on failure]
Returns:
[list]: [List of animes]
"""
# concat the name to form the search url.
search_url = f"{BASE_URL}/search/{anime_name}"
response = requests.get(
search_url, headers={"UserAgent": UserAgent().chrome}
) # request the url.
# Is the response ok?
response.raise_for_status()
# parse with soup.
soup = BeautifulSoup(response.text, "html.parser")
# get list of anime
anime_ul = soup.find("ul", {"class": "items"})
if anime_ul is None or isinstance(anime_ul, NavigableString):
msg = f"Could not find and anime with name {anime_name}"
raise ValueError(msg)
anime_li = anime_ul.children
# for each anime, insert to list. the name and url.
anime_list = []
for anime in anime_li:
if isinstance(anime, Tag):
anime_url = anime.find("a")
if anime_url is None or isinstance(anime_url, NavigableString):
continue
anime_title = anime.find("a")
if anime_title is None or isinstance(anime_title, NavigableString):
continue
anime_list.append({"title": anime_title["title"], "url": anime_url["href"]})
return anime_list
def search_anime_episode_list(episode_endpoint: str) -> list:
"""[summary]
Take a URL and
return a list of episodes after scraping the site.
>>> type(search_anime_episode_list("/anime/kimetsu-no-yaiba"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of episodes]
"""
request_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(url=request_url, headers={"User-Agent": UserAgent().chrome})
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
# With this id. get the episode list.
episode_page_ul = soup.find("ul", {"id": "episode_related"})
if episode_page_ul is None or isinstance(episode_page_ul, NavigableString):
msg = f"Could not find any anime eposiodes with name {anime_name}"
raise ValueError(msg)
episode_page_li = episode_page_ul.children
episode_list = []
for episode in episode_page_li:
if isinstance(episode, Tag):
url = episode.find("a")
if url is None or isinstance(url, NavigableString):
continue
title = episode.find("div", {"class": "name"})
if title is None or isinstance(title, NavigableString):
continue
episode_list.append(
{"title": title.text.replace(" ", ""), "url": url["href"]}
)
return episode_list
def get_anime_episode(episode_endpoint: str) -> list:
"""[summary]
Get click url and download url from episode url
>>> type(get_anime_episode("/watch/kimetsu-no-yaiba/1"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of download and watch url]
"""
episode_page_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(
url=episode_page_url, headers={"User-Agent": UserAgent().chrome}
)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
url = soup.find("iframe", {"id": "playerframe"})
if url is None or isinstance(url, NavigableString):
msg = f"Could not find url and download url from {episode_endpoint}"
raise RuntimeError(msg)
episode_url = url["src"]
if not isinstance(episode_url, str):
msg = f"Could not find url and download url from {episode_endpoint}"
raise RuntimeError(msg)
download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8"
return [f"{BASE_URL}{episode_url}", f"{BASE_URL}{download_url}"]
if __name__ == "__main__":
anime_name = input("Enter anime name: ").strip()
anime_list = search_scraper(anime_name)
print("\n")
if len(anime_list) == 0:
print("No anime found with this name")
else:
print(f"Found {len(anime_list)} results: ")
for i, anime in enumerate(anime_list):
anime_title = anime["title"]
print(f"{i+1}. {anime_title}")
anime_choice = int(input("\nPlease choose from the following list: ").strip())
chosen_anime = anime_list[anime_choice - 1]
print(f"You chose {chosen_anime['title']}. Searching for episodes...")
episode_list = search_anime_episode_list(chosen_anime["url"])
if len(episode_list) == 0:
print("No episode found for this anime")
else:
print(f"Found {len(episode_list)} results: ")
for i, episode in enumerate(episode_list):
print(f"{i+1}. {episode['title']}")
episode_choice = int(input("\nChoose an episode by serial no: ").strip())
chosen_episode = episode_list[episode_choice - 1]
print(f"You chose {chosen_episode['title']}. Searching...")
episode_url, download_url = get_anime_episode(chosen_episode["url"])
print(f"\nTo watch, ctrl+click on {episode_url}.")
print(f"To download, ctrl+click on {download_url}.")
| import requests
from bs4 import BeautifulSoup, NavigableString, Tag
from fake_useragent import UserAgent
BASE_URL = "https://ww1.gogoanime2.org"
def search_scraper(anime_name: str) -> list:
"""[summary]
Take a URL and
return a list of anime after scraping the site.
>>> type(search_scraper("demon_slayer"))
<class 'list'>
Args:
anime_name (str): [Name of anime]
Raises:
e: [Raises exception on failure]
Returns:
[list]: [List of animes]
"""
# concat the name to form the search url.
search_url = f"{BASE_URL}/search/{anime_name}"
response = requests.get(
search_url, headers={"UserAgent": UserAgent().chrome}
) # request the url.
# Is the response ok?
response.raise_for_status()
# parse with soup.
soup = BeautifulSoup(response.text, "html.parser")
# get list of anime
anime_ul = soup.find("ul", {"class": "items"})
if anime_ul is None or isinstance(anime_ul, NavigableString):
msg = f"Could not find and anime with name {anime_name}"
raise ValueError(msg)
anime_li = anime_ul.children
# for each anime, insert to list. the name and url.
anime_list = []
for anime in anime_li:
if isinstance(anime, Tag):
anime_url = anime.find("a")
if anime_url is None or isinstance(anime_url, NavigableString):
continue
anime_title = anime.find("a")
if anime_title is None or isinstance(anime_title, NavigableString):
continue
anime_list.append({"title": anime_title["title"], "url": anime_url["href"]})
return anime_list
def search_anime_episode_list(episode_endpoint: str) -> list:
"""[summary]
Take a URL and
return a list of episodes after scraping the site.
>>> type(search_anime_episode_list("/anime/kimetsu-no-yaiba"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of episodes]
"""
request_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(url=request_url, headers={"User-Agent": UserAgent().chrome})
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
# With this id. get the episode list.
episode_page_ul = soup.find("ul", {"id": "episode_related"})
if episode_page_ul is None or isinstance(episode_page_ul, NavigableString):
msg = f"Could not find any anime eposiodes with name {anime_name}"
raise ValueError(msg)
episode_page_li = episode_page_ul.children
episode_list = []
for episode in episode_page_li:
if isinstance(episode, Tag):
url = episode.find("a")
if url is None or isinstance(url, NavigableString):
continue
title = episode.find("div", {"class": "name"})
if title is None or isinstance(title, NavigableString):
continue
episode_list.append(
{"title": title.text.replace(" ", ""), "url": url["href"]}
)
return episode_list
def get_anime_episode(episode_endpoint: str) -> list:
"""[summary]
Get click url and download url from episode url
>>> type(get_anime_episode("/watch/kimetsu-no-yaiba/1"))
<class 'list'>
Args:
episode_endpoint (str): [Endpoint of episode]
Raises:
e: [description]
Returns:
[list]: [List of download and watch url]
"""
episode_page_url = f"{BASE_URL}{episode_endpoint}"
response = requests.get(
url=episode_page_url, headers={"User-Agent": UserAgent().chrome}
)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
url = soup.find("iframe", {"id": "playerframe"})
if url is None or isinstance(url, NavigableString):
msg = f"Could not find url and download url from {episode_endpoint}"
raise RuntimeError(msg)
episode_url = url["src"]
if not isinstance(episode_url, str):
msg = f"Could not find url and download url from {episode_endpoint}"
raise RuntimeError(msg)
download_url = episode_url.replace("/embed/", "/playlist/") + ".m3u8"
return [f"{BASE_URL}{episode_url}", f"{BASE_URL}{download_url}"]
if __name__ == "__main__":
anime_name = input("Enter anime name: ").strip()
anime_list = search_scraper(anime_name)
print("\n")
if len(anime_list) == 0:
print("No anime found with this name")
else:
print(f"Found {len(anime_list)} results: ")
for i, anime in enumerate(anime_list):
anime_title = anime["title"]
print(f"{i+1}. {anime_title}")
anime_choice = int(input("\nPlease choose from the following list: ").strip())
chosen_anime = anime_list[anime_choice - 1]
print(f"You chose {chosen_anime['title']}. Searching for episodes...")
episode_list = search_anime_episode_list(chosen_anime["url"])
if len(episode_list) == 0:
print("No episode found for this anime")
else:
print(f"Found {len(episode_list)} results: ")
for i, episode in enumerate(episode_list):
print(f"{i+1}. {episode['title']}")
episode_choice = int(input("\nChoose an episode by serial no: ").strip())
chosen_episode = episode_list[episode_choice - 1]
print(f"You chose {chosen_episode['title']}. Searching...")
episode_url, download_url = get_anime_episode(chosen_episode["url"])
print(f"\nTo watch, ctrl+click on {episode_url}.")
print(f"To download, ctrl+click on {download_url}.")
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Convert speed units
https://en.wikipedia.org/wiki/Kilometres_per_hour
https://en.wikipedia.org/wiki/Miles_per_hour
https://en.wikipedia.org/wiki/Knot_(unit)
https://en.wikipedia.org/wiki/Metre_per_second
"""
speed_chart: dict[str, float] = {
"km/h": 1.0,
"m/s": 3.6,
"mph": 1.609344,
"knot": 1.852,
}
speed_chart_inverse: dict[str, float] = {
"km/h": 1.0,
"m/s": 0.277777778,
"mph": 0.621371192,
"knot": 0.539956803,
}
def convert_speed(speed: float, unit_from: str, unit_to: str) -> float:
"""
Convert speed from one unit to another using the speed_chart above.
"km/h": 1.0,
"m/s": 3.6,
"mph": 1.609344,
"knot": 1.852,
>>> convert_speed(100, "km/h", "m/s")
27.778
>>> convert_speed(100, "km/h", "mph")
62.137
>>> convert_speed(100, "km/h", "knot")
53.996
>>> convert_speed(100, "m/s", "km/h")
360.0
>>> convert_speed(100, "m/s", "mph")
223.694
>>> convert_speed(100, "m/s", "knot")
194.384
>>> convert_speed(100, "mph", "km/h")
160.934
>>> convert_speed(100, "mph", "m/s")
44.704
>>> convert_speed(100, "mph", "knot")
86.898
>>> convert_speed(100, "knot", "km/h")
185.2
>>> convert_speed(100, "knot", "m/s")
51.444
>>> convert_speed(100, "knot", "mph")
115.078
"""
if unit_to not in speed_chart or unit_from not in speed_chart_inverse:
msg = (
f"Incorrect 'unit_from' or 'unit_to' value: {unit_from!r}, {unit_to!r}\n"
f"Valid values are: {', '.join(speed_chart_inverse)}"
)
raise ValueError(msg)
return round(speed * speed_chart[unit_from] * speed_chart_inverse[unit_to], 3)
if __name__ == "__main__":
import doctest
doctest.testmod()
| """
Convert speed units
https://en.wikipedia.org/wiki/Kilometres_per_hour
https://en.wikipedia.org/wiki/Miles_per_hour
https://en.wikipedia.org/wiki/Knot_(unit)
https://en.wikipedia.org/wiki/Metre_per_second
"""
speed_chart: dict[str, float] = {
"km/h": 1.0,
"m/s": 3.6,
"mph": 1.609344,
"knot": 1.852,
}
speed_chart_inverse: dict[str, float] = {
"km/h": 1.0,
"m/s": 0.277777778,
"mph": 0.621371192,
"knot": 0.539956803,
}
def convert_speed(speed: float, unit_from: str, unit_to: str) -> float:
"""
Convert speed from one unit to another using the speed_chart above.
"km/h": 1.0,
"m/s": 3.6,
"mph": 1.609344,
"knot": 1.852,
>>> convert_speed(100, "km/h", "m/s")
27.778
>>> convert_speed(100, "km/h", "mph")
62.137
>>> convert_speed(100, "km/h", "knot")
53.996
>>> convert_speed(100, "m/s", "km/h")
360.0
>>> convert_speed(100, "m/s", "mph")
223.694
>>> convert_speed(100, "m/s", "knot")
194.384
>>> convert_speed(100, "mph", "km/h")
160.934
>>> convert_speed(100, "mph", "m/s")
44.704
>>> convert_speed(100, "mph", "knot")
86.898
>>> convert_speed(100, "knot", "km/h")
185.2
>>> convert_speed(100, "knot", "m/s")
51.444
>>> convert_speed(100, "knot", "mph")
115.078
"""
if unit_to not in speed_chart or unit_from not in speed_chart_inverse:
msg = (
f"Incorrect 'unit_from' or 'unit_to' value: {unit_from!r}, {unit_to!r}\n"
f"Valid values are: {', '.join(speed_chart_inverse)}"
)
raise ValueError(msg)
return round(speed * speed_chart[unit_from] * speed_chart_inverse[unit_to], 3)
if __name__ == "__main__":
import doctest
doctest.testmod()
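A quick way to sanity-check the two factor tables above is a round-trip: converting a speed to another unit and back should approximately recover the original value, since `speed_chart` and `speed_chart_inverse` are reciprocal tables. A hedged standalone sketch (the helper name `convert` is illustrative):

```python
# Round-trip consistency check for the km/h-based conversion factors above.
speed_chart = {"km/h": 1.0, "m/s": 3.6, "mph": 1.609344, "knot": 1.852}
speed_chart_inverse = {
    "km/h": 1.0,
    "m/s": 0.277777778,
    "mph": 0.621371192,
    "knot": 0.539956803,
}

def convert(speed: float, unit_from: str, unit_to: str) -> float:
    # Same core formula as convert_speed, without the rounding.
    return speed * speed_chart[unit_from] * speed_chart_inverse[unit_to]

for unit in speed_chart:
    round_trip = convert(convert(100.0, "km/h", unit), unit, "km/h")
    print(unit, round(round_trip, 6))  # each close to 100.0
```

The residual error (well under 1e-4 km/h here) comes from the inverse factors being stored to nine decimal places rather than computed as exact reciprocals.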
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| from json import loads
from pathlib import Path
import numpy as np
from yulewalker import yulewalk
from audio_filters.butterworth_filter import make_highpass
from audio_filters.iir_filter import IIRFilter
data = loads((Path(__file__).resolve().parent / "loudness_curve.json").read_text())
class EqualLoudnessFilter:
r"""
An equal-loudness filter which compensates for the human ear's non-linear response
to sound.
This filter corrects this by cascading a yulewalk filter and a butterworth filter.
Designed for use with samplerate of 44.1kHz and above. If you're using a lower
samplerate, use with caution.
Code based on matlab implementation at https://bit.ly/3eqh2HU
(url shortened for ruff)
Target curve: https://i.imgur.com/3g2VfaM.png
Yulewalk response: https://i.imgur.com/J9LnJ4C.png
Butterworth and overall response: https://i.imgur.com/3g2VfaM.png
Images and original matlab implementation by David Robinson, 2001
"""
def __init__(self, samplerate: int = 44100) -> None:
self.yulewalk_filter = IIRFilter(10)
self.butterworth_filter = make_highpass(150, samplerate)
# pad the data to nyquist
curve_freqs = np.array(data["frequencies"] + [max(20000.0, samplerate / 2)])
curve_gains = np.array(data["gains"] + [140])
# Convert to angular frequency
freqs_normalized = curve_freqs / samplerate * 2
# Invert the curve and normalize to 0dB
gains_normalized = np.power(10, (np.min(curve_gains) - curve_gains) / 20)
# Scipy's `yulewalk` function is a stub, so we're using the
# `yulewalker` library instead.
# This function computes the coefficients using a least-squares
# fit to the specified curve.
ya, yb = yulewalk(10, freqs_normalized, gains_normalized)
self.yulewalk_filter.set_coefficients(ya, yb)
def process(self, sample: float) -> float:
"""
Process a single sample through both filters
>>> filt = EqualLoudnessFilter()
>>> filt.process(0.0)
0.0
"""
tmp = self.yulewalk_filter.process(sample)
return self.butterworth_filter.process(tmp)
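The two-stage `process()` above is a filter cascade: each stage exposes `process(sample)` and its output feeds the next stage. A minimal generic sketch of that pattern follows; `Gain` and `Cascade` are hypothetical stand-ins for illustration, not part of the `audio_filters` package.

```python
class Gain:
    """Hypothetical stand-in stage that simply scales each sample."""

    def __init__(self, factor: float) -> None:
        self.factor = factor

    def process(self, sample: float) -> float:
        return sample * self.factor


class Cascade:
    """Chain any stages that expose process(sample), like the yulewalk
    and butterworth stages in EqualLoudnessFilter."""

    def __init__(self, *stages) -> None:
        self.stages = stages

    def process(self, sample: float) -> float:
        for stage in self.stages:
            sample = stage.process(sample)
        return sample


chain = Cascade(Gain(0.5), Gain(4.0))
print(chain.process(1.0))  # 2.0
```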
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| """
Project Euler Problem 75: https://projecteuler.net/problem=75
It turns out that 12 cm is the smallest length of wire that can be bent to form an
integer sided right angle triangle in exactly one way, but there are many more examples.
12 cm: (3,4,5)
24 cm: (6,8,10)
30 cm: (5,12,13)
36 cm: (9,12,15)
40 cm: (8,15,17)
48 cm: (12,16,20)
In contrast, some lengths of wire, like 20 cm, cannot be bent to form an integer sided
right angle triangle, and other lengths allow more than one solution to be found; for
example, using 120 cm it is possible to form exactly three different integer sided
right angle triangles.
120 cm: (30,40,50), (20,48,52), (24,45,51)
Given that L is the length of the wire, for how many values of L β€ 1,500,000 can
exactly one integer sided right angle triangle be formed?
Solution: we generate all pythagorean triples using Euclid's formula and
keep track of the frequencies of the perimeters.
Reference: https://en.wikipedia.org/wiki/Pythagorean_triple#Generating_a_triple
"""
from collections import defaultdict
from math import gcd
def solution(limit: int = 1500000) -> int:
"""
Return the number of values of L <= limit such that a wire of length L can be
    formed into an integer sided right angle triangle in exactly one way.
>>> solution(50)
6
>>> solution(1000)
112
>>> solution(50000)
5502
"""
frequencies: defaultdict = defaultdict(int)
euclid_m = 2
while 2 * euclid_m * (euclid_m + 1) <= limit:
for euclid_n in range((euclid_m % 2) + 1, euclid_m, 2):
if gcd(euclid_m, euclid_n) > 1:
continue
primitive_perimeter = 2 * euclid_m * (euclid_m + euclid_n)
for perimeter in range(primitive_perimeter, limit + 1, primitive_perimeter):
frequencies[perimeter] += 1
euclid_m += 1
return sum(1 for frequency in frequencies.values() if frequency == 1)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 75: https://projecteuler.net/problem=75
It turns out that 12 cm is the smallest length of wire that can be bent to form an
integer sided right angle triangle in exactly one way, but there are many more examples.
12 cm: (3,4,5)
24 cm: (6,8,10)
30 cm: (5,12,13)
36 cm: (9,12,15)
40 cm: (8,15,17)
48 cm: (12,16,20)
In contrast, some lengths of wire, like 20 cm, cannot be bent to form an integer sided
right angle triangle, and other lengths allow more than one solution to be found; for
example, using 120 cm it is possible to form exactly three different integer sided
right angle triangles.
120 cm: (30,40,50), (20,48,52), (24,45,51)
Given that L is the length of the wire, for how many values of L β€ 1,500,000 can
exactly one integer sided right angle triangle be formed?
Solution: we generate all pythagorean triples using Euclid's formula and
keep track of the frequencies of the perimeters.
Reference: https://en.wikipedia.org/wiki/Pythagorean_triple#Generating_a_triple
"""
from collections import defaultdict
from math import gcd
def solution(limit: int = 1500000) -> int:
"""
Return the number of values of L <= limit such that a wire of length L can be
formmed into an integer sided right angle triangle in exactly one way.
>>> solution(50)
6
>>> solution(1000)
112
>>> solution(50000)
5502
"""
frequencies: defaultdict = defaultdict(int)
euclid_m = 2
while 2 * euclid_m * (euclid_m + 1) <= limit:
for euclid_n in range((euclid_m % 2) + 1, euclid_m, 2):
if gcd(euclid_m, euclid_n) > 1:
continue
primitive_perimeter = 2 * euclid_m * (euclid_m + euclid_n)
for perimeter in range(primitive_perimeter, limit + 1, primitive_perimeter):
frequencies[perimeter] += 1
euclid_m += 1
return sum(1 for frequency in frequencies.values() if frequency == 1)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| """
A harshad number (or more specifically an n-harshad number) is a number that's
divisible by the sum of its digits in some given base n.
Reference: https://en.wikipedia.org/wiki/Harshad_number
"""
def int_to_base(number: int, base: int) -> str:
"""
Convert a given positive decimal integer to base 'base'.
Where 'base' ranges from 2 to 36.
Examples:
>>> int_to_base(23, 2)
'10111'
>>> int_to_base(58, 5)
'213'
>>> int_to_base(167, 16)
'A7'
>>> # bases below 2 and beyond 36 will error
>>> int_to_base(98, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> int_to_base(98, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
result = ""
if number < 0:
raise ValueError("number must be a positive integer")
while number > 0:
number, remainder = divmod(number, base)
result = digits[remainder] + result
if result == "":
result = "0"
return result
def sum_of_digits(num: int, base: int) -> str:
"""
Calculate the sum of digit values in a positive integer
converted to the given 'base'.
Where 'base' ranges from 2 to 36.
Examples:
>>> sum_of_digits(103, 12)
'13'
>>> sum_of_digits(1275, 4)
'30'
>>> sum_of_digits(6645, 2)
'1001'
>>> # bases below 2 and beyond 36 will error
>>> sum_of_digits(543, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> sum_of_digits(543, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
num_str = int_to_base(num, base)
res = sum(int(char, base) for char in num_str)
res_str = int_to_base(res, base)
return res_str
def harshad_numbers_in_base(limit: int, base: int) -> list[str]:
"""
Finds all Harshad numbers smaller than num in base 'base'.
Where 'base' ranges from 2 to 36.
Examples:
>>> harshad_numbers_in_base(15, 2)
['1', '10', '100', '110', '1000', '1010', '1100']
>>> harshad_numbers_in_base(12, 34)
['1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B']
>>> harshad_numbers_in_base(12, 4)
['1', '2', '3', '10', '12', '20', '21']
>>> # bases below 2 and beyond 36 will error
>>> harshad_numbers_in_base(234, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> harshad_numbers_in_base(234, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
if limit < 0:
return []
numbers = [
int_to_base(i, base)
for i in range(1, limit)
if i % int(sum_of_digits(i, base), base) == 0
]
return numbers
def is_harshad_number_in_base(num: int, base: int) -> bool:
"""
Determines whether n in base 'base' is a harshad number.
Where 'base' ranges from 2 to 36.
Examples:
>>> is_harshad_number_in_base(18, 10)
True
>>> is_harshad_number_in_base(21, 10)
True
>>> is_harshad_number_in_base(-21, 5)
False
>>> # bases below 2 and beyond 36 will error
>>> is_harshad_number_in_base(45, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> is_harshad_number_in_base(45, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
if num < 0:
return False
n = int_to_base(num, base)
d = sum_of_digits(num, base)
return int(n, base) % int(d, base) == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
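For intuition, the base-10 special case collapses to a single divisibility check on the digit sum. A standalone sketch, independent of the base-conversion helpers above:

```python
def is_harshad_base10(num: int) -> bool:
    """A positive integer is a base-10 harshad number if its digit sum divides it."""
    if num < 1:
        return False
    return num % sum(int(digit) for digit in str(num)) == 0


print([n for n in range(1, 25) if is_harshad_base10(n)])
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 18, 20, 21, 24]
```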
| """
A harshad number (or more specifically an n-harshad number) is a number that's
divisible by the sum of its digits in some given base n.
Reference: https://en.wikipedia.org/wiki/Harshad_number
"""
def int_to_base(number: int, base: int) -> str:
"""
Convert a given positive decimal integer to base 'base'.
Where 'base' ranges from 2 to 36.
Examples:
>>> int_to_base(23, 2)
'10111'
>>> int_to_base(58, 5)
'213'
>>> int_to_base(167, 16)
'A7'
>>> # bases below 2 and beyond 36 will error
>>> int_to_base(98, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> int_to_base(98, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
result = ""
if number < 0:
raise ValueError("number must be a positive integer")
while number > 0:
number, remainder = divmod(number, base)
result = digits[remainder] + result
if result == "":
result = "0"
return result
def sum_of_digits(num: int, base: int) -> str:
"""
Calculate the sum of digit values in a positive integer
converted to the given 'base'.
Where 'base' ranges from 2 to 36.
Examples:
>>> sum_of_digits(103, 12)
'13'
>>> sum_of_digits(1275, 4)
'30'
>>> sum_of_digits(6645, 2)
'1001'
>>> # bases below 2 and beyond 36 will error
>>> sum_of_digits(543, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> sum_of_digits(543, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
num_str = int_to_base(num, base)
res = sum(int(char, base) for char in num_str)
res_str = int_to_base(res, base)
return res_str
def harshad_numbers_in_base(limit: int, base: int) -> list[str]:
"""
Finds all Harshad numbers smaller than num in base 'base'.
Where 'base' ranges from 2 to 36.
Examples:
>>> harshad_numbers_in_base(15, 2)
['1', '10', '100', '110', '1000', '1010', '1100']
>>> harshad_numbers_in_base(12, 34)
['1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B']
>>> harshad_numbers_in_base(12, 4)
['1', '2', '3', '10', '12', '20', '21']
>>> # bases below 2 and beyond 36 will error
>>> harshad_numbers_in_base(234, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> harshad_numbers_in_base(234, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
if limit < 0:
return []
numbers = [
int_to_base(i, base)
for i in range(1, limit)
if i % int(sum_of_digits(i, base), base) == 0
]
return numbers
def is_harshad_number_in_base(num: int, base: int) -> bool:
"""
Determines whether n in base 'base' is a harshad number.
Where 'base' ranges from 2 to 36.
Examples:
>>> is_harshad_number_in_base(18, 10)
True
>>> is_harshad_number_in_base(21, 10)
True
>>> is_harshad_number_in_base(-21, 5)
False
>>> # bases below 2 and beyond 36 will error
>>> is_harshad_number_in_base(45, 37)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
>>> is_harshad_number_in_base(45, 1)
Traceback (most recent call last):
...
ValueError: 'base' must be between 2 and 36 inclusive
"""
if base < 2 or base > 36:
raise ValueError("'base' must be between 2 and 36 inclusive")
if num < 0:
return False
n = int_to_base(num, base)
d = sum_of_digits(num, base)
return int(n, base) % int(d, base) == 0
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| """
Reverse Polish Notation is also known as Polish postfix notation or simply postfix
notation.
https://en.wikipedia.org/wiki/Reverse_Polish_notation
Classic examples of simple stack implementations.
Valid operators are +, -, *, / and ^.
Each operand may be an integer or another expression.
Output:
Enter a Postfix Equation (space separated) = 5 6 9 * +
Symbol | Action | Stack
-----------------------------------
5 | push(5) | 5
6 | push(6) | 5,6
9 | push(9) | 5,6,9
| pop(9) | 5,6
| pop(6) | 5
* | push(6*9) | 5,54
| pop(54) | 5
| pop(5) |
+ | push(5+54) | 59
Result = 59
"""
# Defining valid unary operator symbols
UNARY_OP_SYMBOLS = ("-", "+")
# operators & their respective operation
OPERATORS = {
"^": lambda p, q: p**q,
"*": lambda p, q: p * q,
"/": lambda p, q: p / q,
"+": lambda p, q: p + q,
"-": lambda p, q: p - q,
}
def parse_token(token: str | float) -> float | str:
"""
    Converts the given token to a float if it is indeed a number, else returns the
    token unchanged if it is a valid operator. This function therefore also serves
    as a check of whether the input is a number or an operator.
Parameters
----------
token: The data that needs to be converted to the appropriate operator or number.
Returns
-------
float or str
Returns a float if `token` is a number or a str if `token` is an operator
"""
if token in OPERATORS:
return token
try:
return float(token)
except ValueError:
msg = f"{token} is neither a number nor a valid operator"
raise ValueError(msg)
def evaluate(post_fix: list[str], verbose: bool = False) -> float:
"""
Evaluate postfix expression using a stack.
>>> evaluate(["0"])
0.0
>>> evaluate(["-0"])
-0.0
>>> evaluate(["1"])
1.0
>>> evaluate(["-1"])
-1.0
>>> evaluate(["-1.1"])
-1.1
>>> evaluate(["2", "1", "+", "3", "*"])
9.0
>>> evaluate(["2", "1.9", "+", "3", "*"])
11.7
>>> evaluate(["2", "-1.9", "+", "3", "*"])
0.30000000000000027
>>> evaluate(["4", "13", "5", "/", "+"])
6.6
>>> evaluate(["2", "-", "3", "+"])
1.0
>>> evaluate(["-4", "5", "*", "6", "-"])
-26.0
>>> evaluate([])
0
>>> evaluate(["4", "-", "6", "7", "/", "9", "8"])
Traceback (most recent call last):
...
ArithmeticError: Input is not a valid postfix expression
Parameters
----------
post_fix:
The postfix expression is tokenized into operators and operands and stored
as a Python list
verbose:
Display stack contents while evaluating the expression if verbose is True
Returns
-------
float
The evaluated value
"""
if not post_fix:
return 0
# Checking the list to find out whether the postfix expression is valid
valid_expression = [parse_token(token) for token in post_fix]
if verbose:
# print table header
print("Symbol".center(8), "Action".center(12), "Stack", sep=" | ")
print("-" * (30 + len(post_fix)))
stack = []
for x in valid_expression:
if x not in OPERATORS:
stack.append(x) # append x to stack
if verbose:
# output in tabular format
print(
f"{x}".rjust(8),
f"push({x})".ljust(12),
stack,
sep=" | ",
)
continue
# If x is operator
# If only 1 value is inside the stack and + or - is encountered
# then this is unary + or - case
if x in UNARY_OP_SYMBOLS and len(stack) < 2:
b = stack.pop() # pop stack
if x == "-":
b *= -1 # negate b
stack.append(b)
if verbose:
# output in tabular format
print(
"".rjust(8),
f"pop({b})".ljust(12),
stack,
sep=" | ",
)
print(
str(x).rjust(8),
f"push({x}{b})".ljust(12),
stack,
sep=" | ",
)
continue
b = stack.pop() # pop stack
if verbose:
# output in tabular format
print(
"".rjust(8),
f"pop({b})".ljust(12),
stack,
sep=" | ",
)
a = stack.pop() # pop stack
if verbose:
# output in tabular format
print(
"".rjust(8),
f"pop({a})".ljust(12),
stack,
sep=" | ",
)
# evaluate the 2 values popped from stack & push result to stack
stack.append(OPERATORS[x](a, b)) # type: ignore[index]
if verbose:
# output in tabular format
print(
f"{x}".rjust(8),
f"push({a}{x}{b})".ljust(12),
stack,
sep=" | ",
)
# If everything is executed correctly, the stack will contain
# only one element which is the result
if len(stack) != 1:
raise ArithmeticError("Input is not a valid postfix expression")
return float(stack[0])
if __name__ == "__main__":
# Create a loop so that the user can evaluate postfix expressions multiple times
while True:
expression = input("Enter a Postfix Expression (space separated): ").split(" ")
prompt = "Do you want to see stack contents while evaluating? [y/N]: "
verbose = input(prompt).strip().lower() == "y"
output = evaluate(expression, verbose)
print("Result = ", output)
prompt = "Do you want to enter another expression? [y/N]: "
if input(prompt).strip().lower() != "y":
break
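For reference, the stack discipline the file above implements can be sketched in a few lines. `eval_postfix` and `ops` are illustrative names, not part of the module; unary `+`/`-` handling and verbose output are omitted:

```python
def eval_postfix(tokens: list[str]) -> float:
    # Map each operator token to its binary operation.
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack: list[float] = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()  # right operand comes off first
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

print(eval_postfix("5 6 9 * +".split()))  # 59.0
```

Popping the right operand before the left matters for the non-commutative `-` and `/` cases.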
| """
Reverse Polish Nation is also known as Polish postfix notation or simply postfix
notation.
https://en.wikipedia.org/wiki/Reverse_Polish_notation
Classic examples of simple stack implementations.
Valid operators are +, -, *, /.
Each operand may be an integer or another expression.
Output:
Enter a Postfix Equation (space separated) = 5 6 9 * +
Symbol | Action | Stack
-----------------------------------
5 | push(5) | 5
6 | push(6) | 5,6
9 | push(9) | 5,6,9
| pop(9) | 5,6
| pop(6) | 5
* | push(6*9) | 5,54
| pop(54) | 5
| pop(5) |
+ | push(5+54) | 59
Result = 59
"""
# Defining valid unary operator symbols
UNARY_OP_SYMBOLS = ("-", "+")
# operators & their respective operation
OPERATORS = {
"^": lambda p, q: p**q,
"*": lambda p, q: p * q,
"/": lambda p, q: p / q,
"+": lambda p, q: p + q,
"-": lambda p, q: p - q,
}
def parse_token(token: str | float) -> float | str:
"""
Converts the given data to the appropriate number if it is indeed a number, else
returns the data as it is with a False flag. This function also serves as a check
of whether the input is a number or not.
Parameters
----------
token: The data that needs to be converted to the appropriate operator or number.
Returns
-------
float or str
Returns a float if `token` is a number or a str if `token` is an operator
"""
if token in OPERATORS:
return token
try:
return float(token)
except ValueError:
msg = f"{token} is neither a number nor a valid operator"
raise ValueError(msg)
def evaluate(post_fix: list[str], verbose: bool = False) -> float:
"""
Evaluate postfix expression using a stack.
>>> evaluate(["0"])
0.0
>>> evaluate(["-0"])
-0.0
>>> evaluate(["1"])
1.0
>>> evaluate(["-1"])
-1.0
>>> evaluate(["-1.1"])
-1.1
>>> evaluate(["2", "1", "+", "3", "*"])
9.0
>>> evaluate(["2", "1.9", "+", "3", "*"])
11.7
>>> evaluate(["2", "-1.9", "+", "3", "*"])
0.30000000000000027
>>> evaluate(["4", "13", "5", "/", "+"])
6.6
>>> evaluate(["2", "-", "3", "+"])
1.0
>>> evaluate(["-4", "5", "*", "6", "-"])
-26.0
>>> evaluate([])
0
>>> evaluate(["4", "-", "6", "7", "/", "9", "8"])
Traceback (most recent call last):
...
ArithmeticError: Input is not a valid postfix expression
Parameters
----------
post_fix:
The postfix expression is tokenized into operators and operands and stored
as a Python list
verbose:
Display stack contents while evaluating the expression if verbose is True
Returns
-------
float
The evaluated value
"""
if not post_fix:
return 0
# Checking the list to find out whether the postfix expression is valid
valid_expression = [parse_token(token) for token in post_fix]
if verbose:
# print table header
print("Symbol".center(8), "Action".center(12), "Stack", sep=" | ")
print("-" * (30 + len(post_fix)))
stack = []
for x in valid_expression:
if x not in OPERATORS:
stack.append(x) # append x to stack
if verbose:
# output in tabular format
print(
f"{x}".rjust(8),
f"push({x})".ljust(12),
stack,
sep=" | ",
)
continue
# If x is operator
# If only 1 value is inside the stack and + or - is encountered
# then this is unary + or - case
if x in UNARY_OP_SYMBOLS and len(stack) < 2:
b = stack.pop() # pop stack
if x == "-":
b *= -1 # negate b
stack.append(b)
if verbose:
# output in tabular format
print(
"".rjust(8),
f"pop({b})".ljust(12),
stack,
sep=" | ",
)
print(
str(x).rjust(8),
f"push({x}{b})".ljust(12),
stack,
sep=" | ",
)
continue
b = stack.pop() # pop stack
if verbose:
# output in tabular format
print(
"".rjust(8),
f"pop({b})".ljust(12),
stack,
sep=" | ",
)
a = stack.pop() # pop stack
if verbose:
# output in tabular format
print(
"".rjust(8),
f"pop({a})".ljust(12),
stack,
sep=" | ",
)
# evaluate the 2 values popped from stack & push result to stack
stack.append(OPERATORS[x](a, b)) # type: ignore[index]
if verbose:
# output in tabular format
print(
f"{x}".rjust(8),
f"push({a}{x}{b})".ljust(12),
stack,
sep=" | ",
)
# If everything is executed correctly, the stack will contain
# only one element which is the result
if len(stack) != 1:
raise ArithmeticError("Input is not a valid postfix expression")
return float(stack[0])
if __name__ == "__main__":
# Create a loop so that the user can evaluate postfix expressions multiple times
while True:
expression = input("Enter a Postfix Expression (space separated): ").split(" ")
prompt = "Do you want to see stack contents while evaluating? [y/N]: "
verbose = input(prompt).strip().lower() == "y"
output = evaluate(expression, verbose)
print("Result = ", output)
prompt = "Do you want to enter another expression? [y/N]: "
if input(prompt).strip().lower() != "y":
break
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| class BinaryHeap:
"""
A max-heap implementation in Python
>>> binary_heap = BinaryHeap()
>>> binary_heap.insert(6)
>>> binary_heap.insert(10)
>>> binary_heap.insert(15)
>>> binary_heap.insert(12)
>>> binary_heap.pop()
15
>>> binary_heap.pop()
12
>>> binary_heap.get_list
[10, 6]
>>> len(binary_heap)
2
"""
def __init__(self):
self.__heap = [0]
self.__size = 0
def __swap_up(self, i: int) -> None:
"""Swap the element up"""
temporary = self.__heap[i]
while i // 2 > 0:
if self.__heap[i] > self.__heap[i // 2]:
self.__heap[i] = self.__heap[i // 2]
self.__heap[i // 2] = temporary
i //= 2
def insert(self, value: int) -> None:
"""Insert new element"""
self.__heap.append(value)
self.__size += 1
self.__swap_up(self.__size)
def __swap_down(self, i: int) -> None:
"""Swap the element down"""
while self.__size >= 2 * i:
if 2 * i + 1 > self.__size:
bigger_child = 2 * i
else:
if self.__heap[2 * i] > self.__heap[2 * i + 1]:
bigger_child = 2 * i
else:
bigger_child = 2 * i + 1
temporary = self.__heap[i]
if self.__heap[i] < self.__heap[bigger_child]:
self.__heap[i] = self.__heap[bigger_child]
self.__heap[bigger_child] = temporary
i = bigger_child
def pop(self) -> int:
"""Pop the root element"""
max_value = self.__heap[1]
self.__heap[1] = self.__heap[self.__size]
self.__size -= 1
self.__heap.pop()
self.__swap_down(1)
return max_value
@property
def get_list(self):
return self.__heap[1:]
def __len__(self):
"""Length of the array"""
return self.__size
if __name__ == "__main__":
import doctest
doctest.testmod()
# create an instance of BinaryHeap
binary_heap = BinaryHeap()
binary_heap.insert(6)
binary_heap.insert(10)
binary_heap.insert(15)
binary_heap.insert(12)
# pop root (max value, because it is a max-heap)
print(binary_heap.pop()) # 15
print(binary_heap.pop()) # 12
# get the list and size after operations
print(binary_heap.get_list)
print(len(binary_heap))
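The same pop order can be had from the standard library by storing negated values in `heapq`, a common min-heap-to-max-heap idiom. `MaxHeap` here is a sketch for comparison, not part of the class above:

```python
import heapq

class MaxHeap:
    """Max-heap on top of the stdlib min-heap by negating stored values."""

    def __init__(self) -> None:
        self._data: list[int] = []

    def insert(self, value: int) -> None:
        heapq.heappush(self._data, -value)  # negate so the max sorts first

    def pop(self) -> int:
        return -heapq.heappop(self._data)

h = MaxHeap()
for v in (6, 10, 15, 12):
    h.insert(v)
print(h.pop(), h.pop())  # 15 12
```

`heapq` keeps the list in heap order in place, so there is no explicit swap-up/swap-down code to maintain.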
|
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12.
| import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import numpy as np
from sklearn.utils import shuffle
import input_data
random_number = 42
np.random.seed(random_number)
def ReLu(x):
mask = (x > 0) * 1.0
return mask * x
def d_ReLu(x):
mask = (x > 0) * 1.0
return mask
def arctan(x):
return np.arctan(x)
def d_arctan(x):
return 1 / (1 + x ** 2)
def log(x):
return 1 / (1 + np.exp(-1 * x))
def d_log(x):
return log(x) * (1 - log(x))
def tanh(x):
return np.tanh(x)
def d_tanh(x):
return 1 - np.tanh(x) ** 2
def plot(samples):
fig = plt.figure(figsize=(4, 4))
gs = gridspec.GridSpec(4, 4)
gs.update(wspace=0.05, hspace=0.05)
for i, sample in enumerate(samples):
ax = plt.subplot(gs[i])
plt.axis("off")
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_aspect("equal")
plt.imshow(sample.reshape(28, 28), cmap="Greys_r")
return fig
if __name__ == "__main__":
# 1. Load data and declare hyperparameters
print("--------- Load Data ----------")
mnist = input_data.read_data_sets("MNIST_data", one_hot=False)
temp = mnist.test
images, labels = temp.images, temp.labels
images, labels = shuffle(np.asarray(images), np.asarray(labels))
num_epoch = 10
learing_rate = 0.00009
G_input = 100
hidden_input, hidden_input2, hidden_input3 = 128, 256, 346
hidden_input4, hidden_input5, hidden_input6 = 480, 560, 686
print("--------- Declare Hyper Parameters ----------")
# 2. Declare Weights
D_W1 = (
np.random.normal(size=(784, hidden_input), scale=(1.0 / np.sqrt(784 / 2.0)))
* 0.002
)
# D_b1 = np.random.normal(size=(128),scale=(1. / np.sqrt(128 / 2.))) *0.002
D_b1 = np.zeros(hidden_input)
D_W2 = (
np.random.normal(
size=(hidden_input, 1), scale=(1.0 / np.sqrt(hidden_input / 2.0))
)
* 0.002
)
# D_b2 = np.random.normal(size=(1),scale=(1. / np.sqrt(1 / 2.))) *0.002
D_b2 = np.zeros(1)
G_W1 = (
np.random.normal(
size=(G_input, hidden_input), scale=(1.0 / np.sqrt(G_input / 2.0))
)
* 0.002
)
# G_b1 = np.random.normal(size=(128),scale=(1. / np.sqrt(128 / 2.))) *0.002
G_b1 = np.zeros(hidden_input)
G_W2 = (
np.random.normal(
size=(hidden_input, hidden_input2),
scale=(1.0 / np.sqrt(hidden_input / 2.0)),
)
* 0.002
)
# G_b1 = np.random.normal(size=(128),scale=(1. / np.sqrt(128 / 2.))) *0.002
G_b2 = np.zeros(hidden_input2)
G_W3 = (
np.random.normal(
size=(hidden_input2, hidden_input3),
scale=(1.0 / np.sqrt(hidden_input2 / 2.0)),
)
* 0.002
)
# G_b1 = np.random.normal(size=(128),scale=(1. / np.sqrt(128 / 2.))) *0.002
G_b3 = np.zeros(hidden_input3)
G_W4 = (
np.random.normal(
size=(hidden_input3, hidden_input4),
scale=(1.0 / np.sqrt(hidden_input3 / 2.0)),
)
* 0.002
)
# G_b1 = np.random.normal(size=(128),scale=(1. / np.sqrt(128 / 2.))) *0.002
G_b4 = np.zeros(hidden_input4)
G_W5 = (
np.random.normal(
size=(hidden_input4, hidden_input5),
scale=(1.0 / np.sqrt(hidden_input4 / 2.0)),
)
* 0.002
)
# G_b1 = np.random.normal(size=(128),scale=(1. / np.sqrt(128 / 2.))) *0.002
G_b5 = np.zeros(hidden_input5)
G_W6 = (
np.random.normal(
size=(hidden_input5, hidden_input6),
scale=(1.0 / np.sqrt(hidden_input5 / 2.0)),
)
* 0.002
)
# G_b1 = np.random.normal(size=(128),scale=(1. / np.sqrt(128 / 2.))) *0.002
G_b6 = np.zeros(hidden_input6)
G_W7 = (
np.random.normal(
size=(hidden_input6, 784), scale=(1.0 / np.sqrt(hidden_input6 / 2.0))
)
* 0.002
)
# G_b2 = np.random.normal(size=(784),scale=(1. / np.sqrt(784 / 2.))) *0.002
G_b7 = np.zeros(784)
# 3. For Adam Optimizer
v1, m1 = 0, 0
v2, m2 = 0, 0
v3, m3 = 0, 0
v4, m4 = 0, 0
v5, m5 = 0, 0
v6, m6 = 0, 0
v7, m7 = 0, 0
v8, m8 = 0, 0
v9, m9 = 0, 0
v10, m10 = 0, 0
v11, m11 = 0, 0
v12, m12 = 0, 0
v13, m13 = 0, 0
v14, m14 = 0, 0
v15, m15 = 0, 0
v16, m16 = 0, 0
v17, m17 = 0, 0
v18, m18 = 0, 0
beta_1, beta_2, eps = 0.9, 0.999, 0.00000001
print("--------- Started Training ----------")
for iter in range(num_epoch):
random_int = np.random.randint(len(images) - 5)
current_image = np.expand_dims(images[random_int], axis=0)
# Func: Generate The first Fake Data
Z = np.random.uniform(-1.0, 1.0, size=[1, G_input])
Gl1 = Z.dot(G_W1) + G_b1
Gl1A = arctan(Gl1)
Gl2 = Gl1A.dot(G_W2) + G_b2
Gl2A = ReLu(Gl2)
Gl3 = Gl2A.dot(G_W3) + G_b3
Gl3A = arctan(Gl3)
Gl4 = Gl3A.dot(G_W4) + G_b4
Gl4A = ReLu(Gl4)
Gl5 = Gl4A.dot(G_W5) + G_b5
Gl5A = tanh(Gl5)
Gl6 = Gl5A.dot(G_W6) + G_b6
Gl6A = ReLu(Gl6)
Gl7 = Gl6A.dot(G_W7) + G_b7
current_fake_data = log(Gl7)
# Func: Forward Feed for Real data
Dl1_r = current_image.dot(D_W1) + D_b1
Dl1_rA = ReLu(Dl1_r)
Dl2_r = Dl1_rA.dot(D_W2) + D_b2
Dl2_rA = log(Dl2_r)
# Func: Forward Feed for Fake Data
Dl1_f = current_fake_data.dot(D_W1) + D_b1
Dl1_fA = ReLu(Dl1_f)
Dl2_f = Dl1_fA.dot(D_W2) + D_b2
Dl2_fA = log(Dl2_f)
# Func: Cost D
D_cost = -np.log(Dl2_rA) + np.log(1.0 - Dl2_fA)
# Func: Gradient
grad_f_w2_part_1 = 1 / (1.0 - Dl2_fA)
grad_f_w2_part_2 = d_log(Dl2_f)
grad_f_w2_part_3 = Dl1_fA
grad_f_w2 = grad_f_w2_part_3.T.dot(grad_f_w2_part_1 * grad_f_w2_part_2)
grad_f_b2 = grad_f_w2_part_1 * grad_f_w2_part_2
grad_f_w1_part_1 = (grad_f_w2_part_1 * grad_f_w2_part_2).dot(D_W2.T)
grad_f_w1_part_2 = d_ReLu(Dl1_f)
grad_f_w1_part_3 = current_fake_data
grad_f_w1 = grad_f_w1_part_3.T.dot(grad_f_w1_part_1 * grad_f_w1_part_2)
grad_f_b1 = grad_f_w1_part_1 * grad_f_w1_part_2
grad_r_w2_part_1 = -1 / Dl2_rA
grad_r_w2_part_2 = d_log(Dl2_r)
grad_r_w2_part_3 = Dl1_rA
grad_r_w2 = grad_r_w2_part_3.T.dot(grad_r_w2_part_1 * grad_r_w2_part_2)
grad_r_b2 = grad_r_w2_part_1 * grad_r_w2_part_2
grad_r_w1_part_1 = (grad_r_w2_part_1 * grad_r_w2_part_2).dot(D_W2.T)
grad_r_w1_part_2 = d_ReLu(Dl1_r)
grad_r_w1_part_3 = current_image
grad_r_w1 = grad_r_w1_part_3.T.dot(grad_r_w1_part_1 * grad_r_w1_part_2)
grad_r_b1 = grad_r_w1_part_1 * grad_r_w1_part_2
grad_w1 = grad_f_w1 + grad_r_w1
grad_b1 = grad_f_b1 + grad_r_b1
grad_w2 = grad_f_w2 + grad_r_w2
grad_b2 = grad_f_b2 + grad_r_b2
# ---- Update Gradient ----
m1 = beta_1 * m1 + (1 - beta_1) * grad_w1
v1 = beta_2 * v1 + (1 - beta_2) * grad_w1 ** 2
m2 = beta_1 * m2 + (1 - beta_1) * grad_b1
v2 = beta_2 * v2 + (1 - beta_2) * grad_b1 ** 2
m3 = beta_1 * m3 + (1 - beta_1) * grad_w2
v3 = beta_2 * v3 + (1 - beta_2) * grad_w2 ** 2
m4 = beta_1 * m4 + (1 - beta_1) * grad_b2
v4 = beta_2 * v4 + (1 - beta_2) * grad_b2 ** 2
D_W1 = D_W1 - (learing_rate / (np.sqrt(v1 / (1 - beta_2)) + eps)) * (
m1 / (1 - beta_1)
)
D_b1 = D_b1 - (learing_rate / (np.sqrt(v2 / (1 - beta_2)) + eps)) * (
m2 / (1 - beta_1)
)
D_W2 = D_W2 - (learing_rate / (np.sqrt(v3 / (1 - beta_2)) + eps)) * (
m3 / (1 - beta_1)
)
D_b2 = D_b2 - (learing_rate / (np.sqrt(v4 / (1 - beta_2)) + eps)) * (
m4 / (1 - beta_1)
)
# Func: Forward Feed for G
Z = np.random.uniform(-1.0, 1.0, size=[1, G_input])
Gl1 = Z.dot(G_W1) + G_b1
Gl1A = arctan(Gl1)
Gl2 = Gl1A.dot(G_W2) + G_b2
Gl2A = ReLu(Gl2)
Gl3 = Gl2A.dot(G_W3) + G_b3
Gl3A = arctan(Gl3)
Gl4 = Gl3A.dot(G_W4) + G_b4
Gl4A = ReLu(Gl4)
Gl5 = Gl4A.dot(G_W5) + G_b5
Gl5A = tanh(Gl5)
Gl6 = Gl5A.dot(G_W6) + G_b6
Gl6A = ReLu(Gl6)
Gl7 = Gl6A.dot(G_W7) + G_b7
current_fake_data = log(Gl7)
Dl1 = current_fake_data.dot(D_W1) + D_b1
Dl1_A = ReLu(Dl1)
Dl2 = Dl1_A.dot(D_W2) + D_b2
Dl2_A = log(Dl2)
# Func: Cost G
G_cost = -np.log(Dl2_A)
# Func: Gradient
grad_G_w7_part_1 = ((-1 / Dl2_A) * d_log(Dl2).dot(D_W2.T) * (d_ReLu(Dl1))).dot(
D_W1.T
)
grad_G_w7_part_2 = d_log(Gl7)
grad_G_w7_part_3 = Gl6A
grad_G_w7 = grad_G_w7_part_3.T.dot(grad_G_w7_part_1 * grad_G_w7_part_2)
grad_G_b7 = grad_G_w7_part_1 * grad_G_w7_part_2
grad_G_w6_part_1 = (grad_G_w7_part_1 * grad_G_w7_part_2).dot(G_W7.T)
grad_G_w6_part_2 = d_ReLu(Gl6)
grad_G_w6_part_3 = Gl5A
grad_G_w6 = grad_G_w6_part_3.T.dot(grad_G_w6_part_1 * grad_G_w6_part_2)
grad_G_b6 = grad_G_w6_part_1 * grad_G_w6_part_2
grad_G_w5_part_1 = (grad_G_w6_part_1 * grad_G_w6_part_2).dot(G_W6.T)
grad_G_w5_part_2 = d_tanh(Gl5)
grad_G_w5_part_3 = Gl4A
grad_G_w5 = grad_G_w5_part_3.T.dot(grad_G_w5_part_1 * grad_G_w5_part_2)
grad_G_b5 = grad_G_w5_part_1 * grad_G_w5_part_2
grad_G_w4_part_1 = (grad_G_w5_part_1 * grad_G_w5_part_2).dot(G_W5.T)
grad_G_w4_part_2 = d_ReLu(Gl4)
grad_G_w4_part_3 = Gl3A
grad_G_w4 = grad_G_w4_part_3.T.dot(grad_G_w4_part_1 * grad_G_w4_part_2)
grad_G_b4 = grad_G_w4_part_1 * grad_G_w4_part_2
grad_G_w3_part_1 = (grad_G_w4_part_1 * grad_G_w4_part_2).dot(G_W4.T)
grad_G_w3_part_2 = d_arctan(Gl3)
grad_G_w3_part_3 = Gl2A
grad_G_w3 = grad_G_w3_part_3.T.dot(grad_G_w3_part_1 * grad_G_w3_part_2)
grad_G_b3 = grad_G_w3_part_1 * grad_G_w3_part_2
grad_G_w2_part_1 = (grad_G_w3_part_1 * grad_G_w3_part_2).dot(G_W3.T)
grad_G_w2_part_2 = d_ReLu(Gl2)
grad_G_w2_part_3 = Gl1A
grad_G_w2 = grad_G_w2_part_3.T.dot(grad_G_w2_part_1 * grad_G_w2_part_2)
grad_G_b2 = grad_G_w2_part_1 * grad_G_w2_part_2
grad_G_w1_part_1 = (grad_G_w2_part_1 * grad_G_w2_part_2).dot(G_W2.T)
grad_G_w1_part_2 = d_arctan(Gl1)
grad_G_w1_part_3 = Z
grad_G_w1 = grad_G_w1_part_3.T.dot(grad_G_w1_part_1 * grad_G_w1_part_2)
grad_G_b1 = grad_G_w1_part_1 * grad_G_w1_part_2
# ---- Update Gradient ----
m5 = beta_1 * m5 + (1 - beta_1) * grad_G_w1
v5 = beta_2 * v5 + (1 - beta_2) * grad_G_w1 ** 2
m6 = beta_1 * m6 + (1 - beta_1) * grad_G_b1
v6 = beta_2 * v6 + (1 - beta_2) * grad_G_b1 ** 2
m7 = beta_1 * m7 + (1 - beta_1) * grad_G_w2
v7 = beta_2 * v7 + (1 - beta_2) * grad_G_w2 ** 2
m8 = beta_1 * m8 + (1 - beta_1) * grad_G_b2
v8 = beta_2 * v8 + (1 - beta_2) * grad_G_b2 ** 2
m9 = beta_1 * m9 + (1 - beta_1) * grad_G_w3
v9 = beta_2 * v9 + (1 - beta_2) * grad_G_w3 ** 2
m10 = beta_1 * m10 + (1 - beta_1) * grad_G_b3
v10 = beta_2 * v10 + (1 - beta_2) * grad_G_b3 ** 2
m11 = beta_1 * m11 + (1 - beta_1) * grad_G_w4
v11 = beta_2 * v11 + (1 - beta_2) * grad_G_w4 ** 2
m12 = beta_1 * m12 + (1 - beta_1) * grad_G_b4
v12 = beta_2 * v12 + (1 - beta_2) * grad_G_b4 ** 2
m13 = beta_1 * m13 + (1 - beta_1) * grad_G_w5
v13 = beta_2 * v13 + (1 - beta_2) * grad_G_w5 ** 2
m14 = beta_1 * m14 + (1 - beta_1) * grad_G_b5
v14 = beta_2 * v14 + (1 - beta_2) * grad_G_b5 ** 2
m15 = beta_1 * m15 + (1 - beta_1) * grad_G_w6
v15 = beta_2 * v15 + (1 - beta_2) * grad_G_w6 ** 2
m16 = beta_1 * m16 + (1 - beta_1) * grad_G_b6
v16 = beta_2 * v16 + (1 - beta_2) * grad_G_b6 ** 2
m17 = beta_1 * m17 + (1 - beta_1) * grad_G_w7
v17 = beta_2 * v17 + (1 - beta_2) * grad_G_w7 ** 2
m18 = beta_1 * m18 + (1 - beta_1) * grad_G_b7
v18 = beta_2 * v18 + (1 - beta_2) * grad_G_b7 ** 2
G_W1 = G_W1 - (learing_rate / (np.sqrt(v5 / (1 - beta_2)) + eps)) * (
m5 / (1 - beta_1)
)
G_b1 = G_b1 - (learing_rate / (np.sqrt(v6 / (1 - beta_2)) + eps)) * (
m6 / (1 - beta_1)
)
G_W2 = G_W2 - (learing_rate / (np.sqrt(v7 / (1 - beta_2)) + eps)) * (
m7 / (1 - beta_1)
)
G_b2 = G_b2 - (learing_rate / (np.sqrt(v8 / (1 - beta_2)) + eps)) * (
m8 / (1 - beta_1)
)
G_W3 = G_W3 - (learing_rate / (np.sqrt(v9 / (1 - beta_2)) + eps)) * (
m9 / (1 - beta_1)
)
G_b3 = G_b3 - (learing_rate / (np.sqrt(v10 / (1 - beta_2)) + eps)) * (
m10 / (1 - beta_1)
)
G_W4 = G_W4 - (learing_rate / (np.sqrt(v11 / (1 - beta_2)) + eps)) * (
m11 / (1 - beta_1)
)
G_b4 = G_b4 - (learing_rate / (np.sqrt(v12 / (1 - beta_2)) + eps)) * (
m12 / (1 - beta_1)
)
G_W5 = G_W5 - (learing_rate / (np.sqrt(v13 / (1 - beta_2)) + eps)) * (
m13 / (1 - beta_1)
)
G_b5 = G_b5 - (learing_rate / (np.sqrt(v14 / (1 - beta_2)) + eps)) * (
m14 / (1 - beta_1)
)
G_W6 = G_W6 - (learing_rate / (np.sqrt(v15 / (1 - beta_2)) + eps)) * (
m15 / (1 - beta_1)
)
G_b6 = G_b6 - (learing_rate / (np.sqrt(v16 / (1 - beta_2)) + eps)) * (
m16 / (1 - beta_1)
)
G_W7 = G_W7 - (learing_rate / (np.sqrt(v17 / (1 - beta_2)) + eps)) * (
m17 / (1 - beta_1)
)
G_b7 = G_b7 - (learing_rate / (np.sqrt(v18 / (1 - beta_2)) + eps)) * (
m18 / (1 - beta_1)
)
# --- Print Error ----
# print("Current Iter: ",iter, " Current D cost:",D_cost, " Current G cost: ", G_cost,end='\r')
if iter == 0:
learing_rate = learing_rate * 0.01
if iter == 40:
learing_rate = learing_rate * 0.01
# ---- Print to Out put ----
if iter % 10 == 0:
print(
"Current Iter: ",
iter,
" Current D cost:",
D_cost,
" Current G cost: ",
G_cost,
end="\r",
)
print("--------- Show Example Result See Tab Above ----------")
print("--------- Wait for the image to load ---------")
Z = np.random.uniform(-1.0, 1.0, size=[16, G_input])
Gl1 = Z.dot(G_W1) + G_b1
Gl1A = arctan(Gl1)
Gl2 = Gl1A.dot(G_W2) + G_b2
Gl2A = ReLu(Gl2)
Gl3 = Gl2A.dot(G_W3) + G_b3
Gl3A = arctan(Gl3)
Gl4 = Gl3A.dot(G_W4) + G_b4
Gl4A = ReLu(Gl4)
Gl5 = Gl4A.dot(G_W5) + G_b5
Gl5A = tanh(Gl5)
Gl6 = Gl5A.dot(G_W6) + G_b6
Gl6A = ReLu(Gl6)
Gl7 = Gl6A.dot(G_W7) + G_b7
current_fake_data = log(Gl7)
fig = plot(current_fake_data)
fig.savefig(
"Click_Me_{}.png".format(
str(iter).zfill(3)
+ "_Ginput_"
+ str(G_input)
+ "_hiddenone"
+ str(hidden_input)
+ "_hiddentwo"
+ str(hidden_input2)
+ "_LR_"
+ str(learing_rate)
),
bbox_inches="tight",
)
# for complete explanation visit https://towardsdatascience.com/only-numpy-implementing-gan-general-adversarial-networks-and-adam-optimizer-using-numpy-with-2a7e4e032021
# -- end code --
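The per-parameter updates above all follow the same Adam pattern: two exponential moving averages (`m*`, `v*`) plus the simplified bias correction `m / (1 - beta_1)` and `v / (1 - beta_2)` applied on every step. A minimal, self-contained sketch of that single-parameter update (names mirror the script; the toy loss `w**2` is illustrative, not part of the original):

```python
import numpy as np

def adam_step(w, grad, m, v, lr=0.00009, beta_1=0.9, beta_2=0.999, eps=1e-8):
    # Moving averages of the gradient and of the squared gradient.
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad ** 2
    # Same simplified bias-correction form the script applies each iteration.
    w = w - (lr / (np.sqrt(v / (1 - beta_2)) + eps)) * (m / (1 - beta_1))
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
for _ in range(100):
    w, m, v = adam_step(w, 2.0 * w, m, v)  # gradient of the toy loss w**2
```

Note that, unlike textbook Adam, the script does not scale the correction by `1 - beta**t`, so early steps are slightly more aggressive than in the canonical formulation.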
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd |
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 |
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd |
| {
"_comment": "The following is a representative average of the Equal Loudness Contours as measured by Robinson and Dadson, 1956",
"_doi": "10.1088/0508-3443/7/5/302",
"frequencies": [
0,
20,
30,
40,
50,
60,
70,
80,
90,
100,
200,
300,
400,
500,
600,
700,
800,
900,
1000,
1500,
2000,
2500,
3000,
3700,
4000,
5000,
6000,
7000,
8000,
9000,
10000,
12000,
15000,
20000
],
"gains": [
120,
113,
103,
97,
93,
91,
89,
87,
86,
85,
78,
76,
76,
76,
76,
77,
78,
79.5,
80,
79,
77,
74,
71.5,
70,
70.5,
74,
79,
84,
86,
86,
85,
95,
110,
125
]
}
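One way a table like this might be consumed is linear interpolation of the gain at an arbitrary frequency. The field names `frequencies`/`gains` come from the JSON above; the helper itself and the sample slice are illustrative only:

```python
from bisect import bisect_right

def gain_at(freq, frequencies, gains):
    # Clamp outside the tabulated range, interpolate linearly inside it.
    if freq <= frequencies[0]:
        return float(gains[0])
    if freq >= frequencies[-1]:
        return float(gains[-1])
    i = bisect_right(frequencies, freq)
    f0, f1 = frequencies[i - 1], frequencies[i]
    g0, g1 = gains[i - 1], gains[i]
    return g0 + (g1 - g0) * (freq - f0) / (f1 - f0)

# First few rows of the table above, just to exercise the helper.
sample_freqs = [0, 20, 30, 40]
sample_gains = [120, 113, 103, 97]
```

The Robinson–Dadson contours are smooth curves, so piecewise-linear interpolation between these measured points is an approximation; a spline would track the published curves more closely.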
| -1 |
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 |
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd |
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 |
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd |
| """
Euler Problem 26
https://projecteuler.net/problem=26
Problem Statement:
A unit fraction contains 1 in the numerator. The decimal representation of the
unit fractions with denominators 2 to 10 are given:
1/2 = 0.5
1/3 = 0.(3)
1/4 = 0.25
1/5 = 0.2
1/6 = 0.1(6)
1/7 = 0.(142857)
1/8 = 0.125
1/9 = 0.(1)
1/10 = 0.1
Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be
seen that 1/7 has a 6-digit recurring cycle.
Find the value of d < 1000 for which 1/d contains the longest recurring cycle
in its decimal fraction part.
"""
def solution(numerator: int = 1, digit: int = 1000) -> int:
"""
    Any range may be provided, since per the problem statement d < 1000
>>> solution(1, 10)
7
>>> solution(10, 100)
97
>>> solution(10, 1000)
983
"""
the_digit = 1
longest_list_length = 0
for divide_by_number in range(numerator, digit + 1):
has_been_divided: list[int] = []
now_divide = numerator
for _ in range(1, digit + 1):
if now_divide in has_been_divided:
if longest_list_length < len(has_been_divided):
longest_list_length = len(has_been_divided)
the_digit = divide_by_number
else:
has_been_divided.append(now_divide)
now_divide = now_divide * 10 % divide_by_number
return the_digit
# Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
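An equivalent formulation (not the repository's implementation, but a useful cross-check): the recurring-cycle length of 1/d equals the multiplicative order of 10 modulo d once all factors of 2 and 5 are stripped from d, since those factors only shift the decimal point before the repetend begins.

```python
def cycle_length(d: int) -> int:
    # Strip factors of 2 and 5, which only delay the start of the repetend.
    while d % 2 == 0:
        d //= 2
    while d % 5 == 0:
        d //= 5
    if d == 1:
        return 0  # terminating decimal, no recurring cycle
    # Multiplicative order of 10 modulo d.
    order, remainder = 1, 10 % d
    while remainder != 1:
        remainder = remainder * 10 % d
        order += 1
    return order

best_d = max(range(2, 1000), key=cycle_length)
```

This agrees with the brute-force remainder tracking above, e.g. `cycle_length(7)` gives the 6-digit cycle of 0.(142857).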
| """
Euler Problem 26
https://projecteuler.net/problem=26
Problem Statement:
A unit fraction contains 1 in the numerator. The decimal representation of the
unit fractions with denominators 2 to 10 are given:
1/2 = 0.5
1/3 = 0.(3)
1/4 = 0.25
1/5 = 0.2
1/6 = 0.1(6)
1/7 = 0.(142857)
1/8 = 0.125
1/9 = 0.(1)
1/10 = 0.1
Where 0.1(6) means 0.166666..., and has a 1-digit recurring cycle. It can be
seen that 1/7 has a 6-digit recurring cycle.
Find the value of d < 1000 for which 1/d contains the longest recurring cycle
in its decimal fraction part.
"""
def solution(numerator: int = 1, digit: int = 1000) -> int:
"""
Any range can be provided,
but as per the problem statement, d < 1000
>>> solution(1, 10)
7
>>> solution(10, 100)
97
>>> solution(10, 1000)
983
"""
the_digit = 1
longest_list_length = 0
for divide_by_number in range(numerator, digit + 1):
has_been_divided: list[int] = []
now_divide = numerator
for _ in range(1, digit + 1):
if now_divide in has_been_divided:
if longest_list_length < len(has_been_divided):
longest_list_length = len(has_been_divided)
the_digit = divide_by_number
else:
has_been_divided.append(now_divide)
now_divide = now_divide * 10 % divide_by_number
return the_digit
# Tests
if __name__ == "__main__":
import doctest
doctest.testmod()
| -1 |
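The solution above finds the longest cycle by simulating long division and recording remainders. As an independent cross-check (not part of the original solution; the function names here are illustrative), the cycle length of 1/d is exactly the number of steps between the first and second appearance of a repeated remainder, which can be computed directly:

```python
def cycle_length(d: int) -> int:
    """Length of the recurring cycle of 1/d (0 if the decimal terminates)."""
    seen: dict[int, int] = {}  # remainder -> position where it first appeared
    remainder, position = 1, 0
    while remainder != 0 and remainder not in seen:
        seen[remainder] = position
        remainder = remainder * 10 % d  # next step of the long division
        position += 1
    return 0 if remainder == 0 else position - seen[remainder]


def longest_cycle(limit: int = 1000) -> int:
    """Denominator d < limit whose 1/d has the longest recurring cycle."""
    return max(range(2, limit), key=cycle_length)


print(cycle_length(7))    # 6, matching 1/7 = 0.(142857)
print(longest_cycle(10))  # 7
```

Unlike the solution in the row above, this stops as soon as a remainder repeats, so it avoids appending to (and scanning) a growing list on every step.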
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| -1 |
||
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| from __future__ import annotations
from abc import abstractmethod
from collections.abc import Iterable
from typing import Generic, Protocol, TypeVar
class Comparable(Protocol):
@abstractmethod
def __lt__(self: T, other: T) -> bool:
pass
@abstractmethod
def __gt__(self: T, other: T) -> bool:
pass
@abstractmethod
def __eq__(self: T, other: object) -> bool:
pass
T = TypeVar("T", bound=Comparable)
class Heap(Generic[T]):
"""A Max Heap Implementation
>>> unsorted = [103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5]
>>> h = Heap()
>>> h.build_max_heap(unsorted)
>>> h
[209, 201, 25, 103, 107, 15, 1, 9, 7, 11, 5]
>>>
>>> h.extract_max()
209
>>> h
[201, 107, 25, 103, 11, 15, 1, 9, 7, 5]
>>>
>>> h.insert(100)
>>> h
[201, 107, 25, 103, 100, 15, 1, 9, 7, 5, 11]
>>>
>>> h.heap_sort()
>>> h
[1, 5, 7, 9, 11, 15, 25, 100, 103, 107, 201]
"""
def __init__(self) -> None:
self.h: list[T] = []
self.heap_size: int = 0
def __repr__(self) -> str:
return str(self.h)
def parent_index(self, child_idx: int) -> int | None:
"""return the parent index of given child"""
if child_idx > 0:
return (child_idx - 1) // 2
return None
def left_child_idx(self, parent_idx: int) -> int | None:
"""
return the left child index if the left child exists.
if not, return None.
"""
left_child_index = 2 * parent_idx + 1
if left_child_index < self.heap_size:
return left_child_index
return None
def right_child_idx(self, parent_idx: int) -> int | None:
"""
return the right child index if the right child exists.
if not, return None.
"""
right_child_index = 2 * parent_idx + 2
if right_child_index < self.heap_size:
return right_child_index
return None
def max_heapify(self, index: int) -> None:
"""
correct a single violation of the heap property in a subtree's root.
"""
if index < self.heap_size:
violation: int = index
left_child = self.left_child_idx(index)
right_child = self.right_child_idx(index)
# check which child is larger than its parent
if left_child is not None and self.h[left_child] > self.h[violation]:
violation = left_child
if right_child is not None and self.h[right_child] > self.h[violation]:
violation = right_child
# if violation indeed exists
if violation != index:
# swap to fix the violation
self.h[violation], self.h[index] = self.h[index], self.h[violation]
# fix the subsequent violation recursively if any
self.max_heapify(violation)
def build_max_heap(self, collection: Iterable[T]) -> None:
"""build max heap from an unsorted array"""
self.h = list(collection)
self.heap_size = len(self.h)
if self.heap_size > 1:
# max_heapify from right to left but exclude leaves (last level)
for i in range(self.heap_size // 2 - 1, -1, -1):
self.max_heapify(i)
def extract_max(self) -> T:
"""get and remove max from heap"""
if self.heap_size >= 2:
me = self.h[0]
self.h[0] = self.h.pop(-1)
self.heap_size -= 1
self.max_heapify(0)
return me
elif self.heap_size == 1:
self.heap_size -= 1
return self.h.pop(-1)
else:
raise Exception("Empty heap")
def insert(self, value: T) -> None:
"""insert a new value into the max heap"""
self.h.append(value)
idx = (self.heap_size - 1) // 2
self.heap_size += 1
while idx >= 0:
self.max_heapify(idx)
idx = (idx - 1) // 2
def heap_sort(self) -> None:
size = self.heap_size
for j in range(size - 1, 0, -1):
self.h[0], self.h[j] = self.h[j], self.h[0]
self.heap_size -= 1
self.max_heapify(0)
self.heap_size = size
if __name__ == "__main__":
import doctest
# run doc test
doctest.testmod()
# demo
for unsorted in [
[0],
[2],
[3, 5],
[5, 3],
[5, 5],
[0, 0, 0, 0],
[1, 1, 1, 1],
[2, 2, 3, 5],
[0, 2, 2, 3, 5],
[2, 5, 3, 0, 2, 3, 0, 3],
[6, 1, 2, 7, 9, 3, 4, 5, 10, 8],
[103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5],
[-45, -2, -5],
]:
print(f"unsorted array: {unsorted}")
heap: Heap[int] = Heap()
heap.build_max_heap(unsorted)
print(f"after build heap: {heap}")
print(f"max value: {heap.extract_max()}")
print(f"after max value removed: {heap}")
heap.insert(100)
print(f"after new value 100 inserted: {heap}")
heap.heap_sort()
print(f"heap-sorted array: {heap}\n")
| from __future__ import annotations
from abc import abstractmethod
from collections.abc import Iterable
from typing import Generic, Protocol, TypeVar
class Comparable(Protocol):
@abstractmethod
def __lt__(self: T, other: T) -> bool:
pass
@abstractmethod
def __gt__(self: T, other: T) -> bool:
pass
@abstractmethod
def __eq__(self: T, other: object) -> bool:
pass
T = TypeVar("T", bound=Comparable)
class Heap(Generic[T]):
"""A Max Heap Implementation
>>> unsorted = [103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5]
>>> h = Heap()
>>> h.build_max_heap(unsorted)
>>> h
[209, 201, 25, 103, 107, 15, 1, 9, 7, 11, 5]
>>>
>>> h.extract_max()
209
>>> h
[201, 107, 25, 103, 11, 15, 1, 9, 7, 5]
>>>
>>> h.insert(100)
>>> h
[201, 107, 25, 103, 100, 15, 1, 9, 7, 5, 11]
>>>
>>> h.heap_sort()
>>> h
[1, 5, 7, 9, 11, 15, 25, 100, 103, 107, 201]
"""
def __init__(self) -> None:
self.h: list[T] = []
self.heap_size: int = 0
def __repr__(self) -> str:
return str(self.h)
def parent_index(self, child_idx: int) -> int | None:
"""return the parent index of given child"""
if child_idx > 0:
return (child_idx - 1) // 2
return None
def left_child_idx(self, parent_idx: int) -> int | None:
"""
return the left child index if the left child exists.
if not, return None.
"""
left_child_index = 2 * parent_idx + 1
if left_child_index < self.heap_size:
return left_child_index
return None
def right_child_idx(self, parent_idx: int) -> int | None:
"""
return the right child index if the right child exists.
if not, return None.
"""
right_child_index = 2 * parent_idx + 2
if right_child_index < self.heap_size:
return right_child_index
return None
def max_heapify(self, index: int) -> None:
"""
correct a single violation of the heap property in a subtree's root.
"""
if index < self.heap_size:
violation: int = index
left_child = self.left_child_idx(index)
right_child = self.right_child_idx(index)
# check which child is larger than its parent
if left_child is not None and self.h[left_child] > self.h[violation]:
violation = left_child
if right_child is not None and self.h[right_child] > self.h[violation]:
violation = right_child
# if violation indeed exists
if violation != index:
# swap to fix the violation
self.h[violation], self.h[index] = self.h[index], self.h[violation]
# fix the subsequent violation recursively if any
self.max_heapify(violation)
def build_max_heap(self, collection: Iterable[T]) -> None:
"""build max heap from an unsorted array"""
self.h = list(collection)
self.heap_size = len(self.h)
if self.heap_size > 1:
# max_heapify from right to left but exclude leaves (last level)
for i in range(self.heap_size // 2 - 1, -1, -1):
self.max_heapify(i)
def extract_max(self) -> T:
"""get and remove max from heap"""
if self.heap_size >= 2:
me = self.h[0]
self.h[0] = self.h.pop(-1)
self.heap_size -= 1
self.max_heapify(0)
return me
elif self.heap_size == 1:
self.heap_size -= 1
return self.h.pop(-1)
else:
raise Exception("Empty heap")
def insert(self, value: T) -> None:
"""insert a new value into the max heap"""
self.h.append(value)
idx = (self.heap_size - 1) // 2
self.heap_size += 1
while idx >= 0:
self.max_heapify(idx)
idx = (idx - 1) // 2
def heap_sort(self) -> None:
size = self.heap_size
for j in range(size - 1, 0, -1):
self.h[0], self.h[j] = self.h[j], self.h[0]
self.heap_size -= 1
self.max_heapify(0)
self.heap_size = size
if __name__ == "__main__":
import doctest
# run doc test
doctest.testmod()
# demo
for unsorted in [
[0],
[2],
[3, 5],
[5, 3],
[5, 5],
[0, 0, 0, 0],
[1, 1, 1, 1],
[2, 2, 3, 5],
[0, 2, 2, 3, 5],
[2, 5, 3, 0, 2, 3, 0, 3],
[6, 1, 2, 7, 9, 3, 4, 5, 10, 8],
[103, 9, 1, 7, 11, 15, 25, 201, 209, 107, 5],
[-45, -2, -5],
]:
print(f"unsorted array: {unsorted}")
heap: Heap[int] = Heap()
heap.build_max_heap(unsorted)
print(f"after build heap: {heap}")
print(f"max value: {heap.extract_max()}")
print(f"after max value removed: {heap}")
heap.insert(100)
print(f"after new value 100 inserted: {heap}")
heap.heap_sort()
print(f"heap-sorted array: {heap}\n")
| -1 |
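The demo loop above checks the custom `Heap` class by printing and inspecting results. A quick automated sanity check of the same max-heap semantics (a standalone sketch using only the standard library, independent of the `Heap` class) is to emulate a max heap with `heapq` on negated values and compare repeated extraction against descending sorted order:

```python
import heapq
import random

random.seed(42)
values = [random.randrange(-50, 50) for _ in range(20)]

# heapq is a min-heap, so storing negated values makes the smallest
# stored item correspond to the largest original value.
negated = [-v for v in values]
heapq.heapify(negated)
descending = [-heapq.heappop(negated) for _ in range(len(values))]

assert descending == sorted(values, reverse=True)
print(descending[0], descending[-1])  # max and min of the original values
```

The same property should hold for the `Heap` class: repeatedly calling `extract_max()` after `build_max_heap(values)` must yield `sorted(values, reverse=True)`.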
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| # Project Euler
Problems are taken from https://projecteuler.net/, the Project Euler website. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright).
Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical
insights to solve. Project Euler is ideal for mathematicians who are learning to code.
The solutions will be checked by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on GitHub Actions logs (under `slowest 10 durations`) and open a pull request to improve those solutions.
## Solution Guidelines
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated here. If you have any doubts about the guidelines, please feel free to [state them clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point, but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
### Coding Style
* Please maintain consistency in project directory and solution file names. Keep the following points in mind:
* Create a new directory only for the problems which do not exist yet.
* If you create a new directory, please create an empty `__init__.py` file inside it as well.
* Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be filled with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on.
* Please provide a link to the problem and other references, if used, in the **module-level docstring**.
* All imports should come ***after*** the module-level docstring.
* You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below:
* It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`.
* When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem.
* Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about.
* There should not be a `doctest` for testing the answer as that is done by our GitHub Actions build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1):
```python
def solution(limit: int = 1000):
"""
A brief statement mentioning what the function is about.
You can have a detailed explanation about the solution method in the
module-level docstring.
>>> solution(1)
...
>>> solution(16)
...
>>> solution(100)
...
"""
```
### Solution Template
You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works.
Please rename the helper functions accordingly, replace the parameter names with descriptive ones, and replace the content within `[square brackets]` (including the brackets) with the appropriate content.
```python
"""
Project Euler Problem [problem number]: [link to the original problem]
... [Entire problem statement] ...
... [Solution explanation - Optional] ...
References [Optional]:
- [Wikipedia link to the topic]
- [Stackoverflow link]
...
"""
import module1
import module2
...
def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
"""
A brief statement explaining what the function is about.
... A more elaborate description ... [Optional]
...
[Doctest]
...
"""
...
# calculations
...
return
# You can have multiple helper functions but the solution function should be
# after all the helper functions ...
def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
"""
A brief statement mentioning what the function is about.
You can have a detailed explanation about the solution in the
module-level docstring.
...
[Doctest as mentioned above]
...
"""
...
# calculations
...
return answer
if __name__ == "__main__":
print(f"{solution() = }")
```
| # Project Euler
Problems are taken from https://projecteuler.net/, the Project Euler website. [Problems are licensed under CC BY-NC-SA 4.0](https://projecteuler.net/copyright).
Project Euler is a series of challenging mathematical/computer programming problems that require more than just mathematical
insights to solve. Project Euler is ideal for mathematicians who are learning to code.
The solutions will be checked by our [automated testing on GitHub Actions](https://github.com/TheAlgorithms/Python/actions) with the help of [this script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). The efficiency of your code is also checked. You can view the top 10 slowest solutions on GitHub Actions logs (under `slowest 10 durations`) and open a pull request to improve those solutions.
## Solution Guidelines
Welcome to [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python)! Before reading the solution guidelines, make sure you read the whole [Contributing Guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) as it won't be repeated here. If you have any doubts about the guidelines, please feel free to [state them clearly in an issue](https://github.com/TheAlgorithms/Python/issues/new) or ask the community in [Gitter](https://gitter.im/TheAlgorithms/community). You can use the [template](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#solution-template) we have provided below as your starting point, but be sure to read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) part first.
### Coding Style
* Please maintain consistency in project directory and solution file names. Keep the following points in mind:
* Create a new directory only for the problems which do not exist yet.
* If you create a new directory, please create an empty `__init__.py` file inside it as well.
* Please name the project **directory** as `problem_<problem_number>` where `problem_number` should be filled with 0s so as to occupy 3 digits. Example: `problem_001`, `problem_002`, `problem_067`, `problem_145`, and so on.
* Please provide a link to the problem and other references, if used, in the **module-level docstring**.
* All imports should come ***after*** the module-level docstring.
* You can have as many helper functions as you want but there should be one main function called `solution` which should satisfy the conditions as stated below:
* It should contain positional argument(s) whose default value is the question input. Example: Please take a look at [Problem 1](https://projecteuler.net/problem=1) where the question is to *Find the sum of all the multiples of 3 or 5 below 1000.* In this case the main solution function will be `solution(limit: int = 1000)`.
* When the `solution` function is called without any arguments like so: `solution()`, it should return the answer to the problem.
* Every function, which includes all the helper functions, if any, and the main solution function, should have `doctest` in the function docstring along with a brief statement mentioning what the function is about.
* There should not be a `doctest` for testing the answer as that is done by our GitHub Actions build using this [script](https://github.com/TheAlgorithms/Python/blob/master/scripts/validate_solutions.py). Keeping in mind the above example of [Problem 1](https://projecteuler.net/problem=1):
```python
def solution(limit: int = 1000):
"""
A brief statement mentioning what the function is about.
You can have a detailed explanation about the solution method in the
module-level docstring.
>>> solution(1)
...
>>> solution(16)
...
>>> solution(100)
...
"""
```
### Solution Template
You can use the below template as your starting point but please read the [Coding Style](https://github.com/TheAlgorithms/Python/blob/master/project_euler/README.md#coding-style) first to understand how the template works.
Please change the name of the helper functions accordingly, change the parameter names with a descriptive one, replace the content within `[square brackets]` (including the brackets) with the appropriate content.
```python
"""
Project Euler Problem [problem number]: [link to the original problem]
... [Entire problem statement] ...
... [Solution explanation - Optional] ...
References [Optional]:
- [Wikipedia link to the topic]
- [Stackoverflow link]
...
"""
import module1
import module2
...
def helper1(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
    """
    A brief statement explaining what the function is about.

    ... A more elaborate description ... [Optional]

    ...
    [Doctest]
    ...
    """
    ...
    # calculations
    ...
    return


# You can have multiple helper functions but the solution function should be
# after all the helper functions ...


def solution(arg1: [type hint], arg2: [type hint], ...) -> [Return type hint]:
    """
    A brief statement mentioning what the function is about.

    You can have a detailed explanation about the solution in the
    module-level docstring.

    ...
    [Doctest as mentioned above]
    ...
    """
    ...
    # calculations
    ...
    return answer


if __name__ == "__main__":
    print(f"{solution() = }")
```
| -1 |
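As a concrete illustration of the template above, a minimal completed solution for [Problem 1](https://projecteuler.net/problem=1) (sum of the multiples of 3 or 5 below the limit) could look like the following sketch. The doctest values for small limits are computed from the problem definition; per the guidelines above, no doctest asserts the final answer for the default input.

```python
"""
Project Euler Problem 1: https://projecteuler.net/problem=1

Find the sum of all the multiples of 3 or 5 below the provided limit.
"""


def solution(limit: int = 1000) -> int:
    """
    Return the sum of all multiples of 3 or 5 below limit.

    >>> solution(10)
    23
    >>> solution(16)
    60
    """
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)


if __name__ == "__main__":
    print(f"{solution() = }")
```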
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Pure Python implementation of a binary search algorithm.
For doctests run following command:
python3 -m doctest -v simple_binary_search.py
For manual testing run:
python3 simple_binary_search.py
"""
from __future__ import annotations
def binary_search(a_list: list[int], item: int) -> bool:
"""
>>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
>>> binary_search(test_list, 3)
False
>>> binary_search(test_list, 13)
True
>>> binary_search([4, 4, 5, 6, 7], 4)
True
>>> binary_search([4, 4, 5, 6, 7], -10)
False
>>> binary_search([-18, 2], -18)
True
>>> binary_search([5], 5)
True
>>> binary_search(['a', 'c', 'd'], 'c')
True
>>> binary_search(['a', 'c', 'd'], 'f')
False
>>> binary_search([], 1)
False
>>> binary_search([-.1, .1 , .8], .1)
True
>>> binary_search(range(-5000, 5000, 10), 80)
True
>>> binary_search(range(-5000, 5000, 10), 1255)
False
>>> binary_search(range(0, 10000, 5), 2)
False
"""
if len(a_list) == 0:
return False
midpoint = len(a_list) // 2
if a_list[midpoint] == item:
return True
if item < a_list[midpoint]:
return binary_search(a_list[:midpoint], item)
else:
return binary_search(a_list[midpoint + 1 :], item)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter the number to be found in the list:\n").strip())
not_str = "" if binary_search(sequence, target) else "not "
print(f"{target} was {not_str}found in {sequence}")
| """
Pure Python implementation of a binary search algorithm.
For doctests run following command:
python3 -m doctest -v simple_binary_search.py
For manual testing run:
python3 simple_binary_search.py
"""
from __future__ import annotations
def binary_search(a_list: list[int], item: int) -> bool:
"""
>>> test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
>>> binary_search(test_list, 3)
False
>>> binary_search(test_list, 13)
True
>>> binary_search([4, 4, 5, 6, 7], 4)
True
>>> binary_search([4, 4, 5, 6, 7], -10)
False
>>> binary_search([-18, 2], -18)
True
>>> binary_search([5], 5)
True
>>> binary_search(['a', 'c', 'd'], 'c')
True
>>> binary_search(['a', 'c', 'd'], 'f')
False
>>> binary_search([], 1)
False
>>> binary_search([-.1, .1 , .8], .1)
True
>>> binary_search(range(-5000, 5000, 10), 80)
True
>>> binary_search(range(-5000, 5000, 10), 1255)
False
>>> binary_search(range(0, 10000, 5), 2)
False
"""
if len(a_list) == 0:
return False
midpoint = len(a_list) // 2
if a_list[midpoint] == item:
return True
if item < a_list[midpoint]:
return binary_search(a_list[:midpoint], item)
else:
return binary_search(a_list[midpoint + 1 :], item)
if __name__ == "__main__":
user_input = input("Enter numbers separated by comma:\n").strip()
sequence = [int(item.strip()) for item in user_input.split(",")]
target = int(input("Enter the number to be found in the list:\n").strip())
not_str = "" if binary_search(sequence, target) else "not "
print(f"{target} was {not_str}found in {sequence}")
| -1 |
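The recursive `binary_search` above slices the list on every call, which copies data and costs O(n) extra work per level. An iterative sketch that keeps two indices avoids the copies while returning the same results; the function name `binary_search_iterative` is my own, not from the repository file.

```python
from __future__ import annotations


def binary_search_iterative(a_list: list[int], item: int) -> bool:
    """Index-based binary search that avoids the list copies made by slicing.

    >>> binary_search_iterative([0, 1, 2, 8, 13, 17, 19, 32, 42], 13)
    True
    >>> binary_search_iterative([0, 1, 2, 8, 13, 17, 19, 32, 42], 3)
    False
    """
    low, high = 0, len(a_list) - 1
    while low <= high:
        midpoint = (low + high) // 2
        if a_list[midpoint] == item:
            return True
        if item < a_list[midpoint]:
            high = midpoint - 1  # search the left half
        else:
            low = midpoint + 1  # search the right half
    return False
```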
TheAlgorithms/Python | 9,576 | Upgrade to Python 3.12 | ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| cclauss | "2023-10-03T08:13:43Z" | "2023-10-03T09:17:11Z" | f964dcbf2ff7c70e4aca20532a38dfb02ce8a4c0 | 0f4e51245f33175b4fb311f633d3821210741bdd | Upgrade to Python 3.12. ### Describe your change:
Repeats #8777
Repeats #9306
* #8777
* #9306
**Disables** algorithms based on `qiskit` and `tensorflow` because those modules are not yet compatible with Python 3.12.
* https://github.com/tensorflow/tensorflow/releases
* https://github.com/Qiskit/qiskit/issues/10887
---
* [ ] Add an algorithm?
* [x] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| """
Project Euler Problem 113: https://projecteuler.net/problem=113
Working from left-to-right if no digit is exceeded by the digit to its left it is
called an increasing number; for example, 134468.
Similarly if no digit is exceeded by the digit to its right it is called a decreasing
number; for example, 66420.
We shall call a positive integer that is neither increasing nor decreasing a
"bouncy" number; for example, 155349.
As n increases, the proportion of bouncy numbers below n increases such that there
are only 12951 numbers below one-million that are not bouncy and only 277032
non-bouncy numbers below 10^10.
How many numbers below a googol (10^100) are not bouncy?
"""
def choose(n: int, r: int) -> int:
"""
Calculate the binomial coefficient c(n,r) using the multiplicative formula.
>>> choose(4,2)
6
>>> choose(5,3)
10
>>> choose(20,6)
38760
"""
ret = 1.0
for i in range(1, r + 1):
ret *= (n + 1 - i) / i
return round(ret)
def non_bouncy_exact(n: int) -> int:
"""
Calculate the number of non-bouncy numbers with at most n digits.
>>> non_bouncy_exact(1)
9
>>> non_bouncy_exact(6)
7998
>>> non_bouncy_exact(10)
136126
"""
return choose(8 + n, n) + choose(9 + n, n) - 10
def non_bouncy_upto(n: int) -> int:
"""
Calculate the number of non-bouncy numbers with at most n digits.
>>> non_bouncy_upto(1)
9
>>> non_bouncy_upto(6)
12951
>>> non_bouncy_upto(10)
277032
"""
return sum(non_bouncy_exact(i) for i in range(1, n + 1))
def solution(num_digits: int = 100) -> int:
"""
Calculate the number of non-bouncy numbers less than a googol.
>>> solution(6)
12951
>>> solution(10)
277032
"""
return non_bouncy_upto(num_digits)
if __name__ == "__main__":
print(f"{solution() = }")
| """
Project Euler Problem 113: https://projecteuler.net/problem=113
Working from left-to-right if no digit is exceeded by the digit to its left it is
called an increasing number; for example, 134468.
Similarly if no digit is exceeded by the digit to its right it is called a decreasing
number; for example, 66420.
We shall call a positive integer that is neither increasing nor decreasing a
"bouncy" number; for example, 155349.
As n increases, the proportion of bouncy numbers below n increases such that there
are only 12951 numbers below one-million that are not bouncy and only 277032
non-bouncy numbers below 10^10.
How many numbers below a googol (10^100) are not bouncy?
"""
def choose(n: int, r: int) -> int:
"""
Calculate the binomial coefficient c(n,r) using the multiplicative formula.
>>> choose(4,2)
6
>>> choose(5,3)
10
>>> choose(20,6)
38760
"""
ret = 1.0
for i in range(1, r + 1):
ret *= (n + 1 - i) / i
return round(ret)
def non_bouncy_exact(n: int) -> int:
"""
Calculate the number of non-bouncy numbers with at most n digits.
>>> non_bouncy_exact(1)
9
>>> non_bouncy_exact(6)
7998
>>> non_bouncy_exact(10)
136126
"""
return choose(8 + n, n) + choose(9 + n, n) - 10
def non_bouncy_upto(n: int) -> int:
"""
Calculate the number of non-bouncy numbers with at most n digits.
>>> non_bouncy_upto(1)
9
>>> non_bouncy_upto(6)
12951
>>> non_bouncy_upto(10)
277032
"""
return sum(non_bouncy_exact(i) for i in range(1, n + 1))
def solution(num_digits: int = 100) -> int:
"""
Calculate the number of non-bouncy numbers less than a googol.
>>> solution(6)
12951
>>> solution(10)
277032
"""
return non_bouncy_upto(num_digits)
if __name__ == "__main__":
print(f"{solution() = }")
| -1 |
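The closed form in `non_bouncy_exact` counts n-digit increasing numbers as C(8+n, n) and decreasing numbers as C(9+n, n), then subtracts 10 to remove double counting. A brute-force cross-check for small digit counts confirms the formula; the helper names below are mine, and `math.comb` replaces the repository's floating-point `choose`.

```python
from math import comb


def is_non_bouncy(num: int) -> bool:
    """Digits are entirely non-decreasing or entirely non-increasing."""
    digits = list(str(num))
    return digits == sorted(digits) or digits == sorted(digits, reverse=True)


def non_bouncy_exact(n: int) -> int:
    """Closed-form count of non-bouncy numbers with exactly n digits."""
    return comb(8 + n, n) + comb(9 + n, n) - 10


def brute_force_exact(n: int) -> int:
    """Count non-bouncy numbers with exactly n digits by direct enumeration."""
    return sum(is_non_bouncy(num) for num in range(10 ** (n - 1), 10**n))


if __name__ == "__main__":
    for n in range(1, 5):
        assert non_bouncy_exact(n) == brute_force_exact(n)
    print("closed form matches brute force for 1-4 digit numbers")
```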
TheAlgorithms/Python | 9,543 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.0.292](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.291...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](https://github.com/codespell-project/codespell/compare/v2.2.5...v2.2.6)
- [github.com/tox-dev/pyproject-fmt: 1.1.0 → 1.2.0](https://github.com/tox-dev/pyproject-fmt/compare/1.1.0...1.2.0)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2023-10-02T23:32:55Z" | "2023-10-07T19:32:28Z" | 60291738d2552999545c414bb8a8e90f86c69678 | 895dffb412d80f29c65a062bf6d91fd2a70d8818 | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.0.292](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.291...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](https://github.com/codespell-project/codespell/compare/v2.2.5...v2.2.6)
- [github.com/tox-dev/pyproject-fmt: 1.1.0 → 1.2.0](https://github.com/tox-dev/pyproject-fmt/compare/1.1.0...1.2.0)
<!--pre-commit.ci end--> | repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-executables-have-shebangs
- id: check-toml
- id: check-yaml
- id: end-of-file-fixer
types: [python]
- id: trailing-whitespace
- id: requirements-txt-fixer
- repo: https://github.com/MarcoGorelli/auto-walrus
rev: v0.2.2
hooks:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.0.292
hooks:
- id: ruff
- repo: https://github.com/psf/black
rev: 23.9.1
hooks:
- id: black
- repo: https://github.com/codespell-project/codespell
rev: v2.2.5
hooks:
- id: codespell
additional_dependencies:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
rev: "1.2.0"
hooks:
- id: pyproject-fmt
- repo: local
hooks:
- id: validate-filenames
name: Validate filenames
entry: ./scripts/validate_filenames.py
language: script
pass_filenames: false
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.14
hooks:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.5.1
hooks:
- id: mypy
args:
- --ignore-missing-imports
- --install-types # See mirrors-mypy README.md
- --non-interactive
additional_dependencies: [types-requests]
| repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-executables-have-shebangs
- id: check-toml
- id: check-yaml
- id: end-of-file-fixer
types: [python]
- id: trailing-whitespace
- id: requirements-txt-fixer
- repo: https://github.com/MarcoGorelli/auto-walrus
rev: v0.2.2
hooks:
- id: auto-walrus
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.0.292
hooks:
- id: ruff
- repo: https://github.com/psf/black
rev: 23.9.1
hooks:
- id: black
- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
additional_dependencies:
- tomli
- repo: https://github.com/tox-dev/pyproject-fmt
rev: "1.2.0"
hooks:
- id: pyproject-fmt
- repo: local
hooks:
- id: validate-filenames
name: Validate filenames
entry: ./scripts/validate_filenames.py
language: script
pass_filenames: false
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.14
hooks:
- id: validate-pyproject
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.5.1
hooks:
- id: mypy
args:
- --ignore-missing-imports
- --install-types # See mirrors-mypy README.md
- --non-interactive
additional_dependencies: [types-requests]
| 1 |
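The config above pins every hook to an exact `rev`, which is what the autoupdate PR bumps. A small stdlib-only sketch that extracts the (repo, rev) pairs from such a file is shown below; it is a line-based scan assuming the flat layout used here (each `rev:` immediately follows its `- repo:` line), not a general YAML parser.

```python
def pinned_hooks(config_text: str) -> list[tuple[str, str]]:
    """Extract (repo, rev) pairs from a flat .pre-commit-config.yaml layout.

    A line-based sketch, not a YAML parser: it assumes each 'rev:' line
    follows its '- repo:' line, as in the config above.
    """
    pairs = []
    repo = None
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("- repo:"):
            # split on the first colon only, so URLs like https:// survive
            repo = line.split(":", 1)[1].strip()
        elif line.startswith("rev:") and repo is not None:
            pairs.append((repo, line.split(":", 1)[1].strip().strip('"')))
            repo = None
    return pairs


if __name__ == "__main__":
    sample = "- repo: https://github.com/psf/black\n  rev: 23.9.1\n"
    print(pinned_hooks(sample))
```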
TheAlgorithms/Python | 9,543 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.0.292](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.291...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](https://github.com/codespell-project/codespell/compare/v2.2.5...v2.2.6)
- [github.com/tox-dev/pyproject-fmt: 1.1.0 → 1.2.0](https://github.com/tox-dev/pyproject-fmt/compare/1.1.0...1.2.0)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2023-10-02T23:32:55Z" | "2023-10-07T19:32:28Z" | 60291738d2552999545c414bb8a8e90f86c69678 | 895dffb412d80f29c65a062bf6d91fd2a70d8818 | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.0.292](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.291...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](https://github.com/codespell-project/codespell/compare/v2.2.5...v2.2.6)
- [github.com/tox-dev/pyproject-fmt: 1.1.0 → 1.2.0](https://github.com/tox-dev/pyproject-fmt/compare/1.1.0...1.2.0)
<!--pre-commit.ci end--> | """
Convolutional Neural Network
Objective : To train a CNN model detect if TB is present in Lung X-ray or not.
Resources CNN Theory :
https://en.wikipedia.org/wiki/Convolutional_neural_network
Resources Tensorflow : https://www.tensorflow.org/tutorials/images/cnn
Download dataset from :
https://lhncbc.nlm.nih.gov/LHC-publications/pubs/TuberculosisChestXrayImageDataSets.html
1. Download the dataset folder and create two folder training set and test set
in the parent dataste folder
2. Move 30-40 image from both TB positive and TB Negative folder
in the test set folder
3. The labels of the iamges will be extracted from the folder name
the image is present in.
"""
# Part 1 - Building the CNN
import numpy as np
# Importing the Keras libraries and packages
import tensorflow as tf
from tensorflow.keras import layers, models
if __name__ == "__main__":
# Initialising the CNN
# (Sequential- Building the model layer by layer)
classifier = models.Sequential()
# Step 1 - Convolution
# Here 64,64 is the length & breadth of dataset images and 3 is for the RGB channel
# (3,3) is the kernel size (filter matrix)
classifier.add(
layers.Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation="relu")
)
# Step 2 - Pooling
classifier.add(layers.MaxPooling2D(pool_size=(2, 2)))
# Adding a second convolutional layer
classifier.add(layers.Conv2D(32, (3, 3), activation="relu"))
classifier.add(layers.MaxPooling2D(pool_size=(2, 2)))
# Step 3 - Flattening
classifier.add(layers.Flatten())
# Step 4 - Full connection
classifier.add(layers.Dense(units=128, activation="relu"))
classifier.add(layers.Dense(units=1, activation="sigmoid"))
# Compiling the CNN
classifier.compile(
optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"]
)
# Part 2 - Fitting the CNN to the images
# Load Trained model weights
# from keras.models import load_model
# regressor=load_model('cnn.h5')
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1.0 / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True
)
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
training_set = train_datagen.flow_from_directory(
"dataset/training_set", target_size=(64, 64), batch_size=32, class_mode="binary"
)
test_set = test_datagen.flow_from_directory(
"dataset/test_set", target_size=(64, 64), batch_size=32, class_mode="binary"
)
classifier.fit_generator(
training_set, steps_per_epoch=5, epochs=30, validation_data=test_set
)
classifier.save("cnn.h5")
# Part 3 - Making new predictions
test_image = tf.keras.preprocessing.image.load_img(
"dataset/single_prediction/image.png", target_size=(64, 64)
)
test_image = tf.keras.preprocessing.image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
result = classifier.predict(test_image)
# training_set.class_indices
if result[0][0] == 0:
prediction = "Normal"
if result[0][0] == 1:
prediction = "Abnormality detected"
| """
Convolutional Neural Network
Objective : To train a CNN model detect if TB is present in Lung X-ray or not.
Resources CNN Theory :
https://en.wikipedia.org/wiki/Convolutional_neural_network
Resources Tensorflow : https://www.tensorflow.org/tutorials/images/cnn
Download dataset from :
https://lhncbc.nlm.nih.gov/LHC-publications/pubs/TuberculosisChestXrayImageDataSets.html
1. Download the dataset folder and create two folder training set and test set
in the parent dataset folder
2. Move 30-40 image from both TB positive and TB Negative folder
in the test set folder
3. The labels of the images will be extracted from the folder name
the image is present in.
"""
# Part 1 - Building the CNN
import numpy as np
# Importing the Keras libraries and packages
import tensorflow as tf
from tensorflow.keras import layers, models
if __name__ == "__main__":
# Initialising the CNN
# (Sequential- Building the model layer by layer)
classifier = models.Sequential()
# Step 1 - Convolution
# Here 64,64 is the length & breadth of dataset images and 3 is for the RGB channel
# (3,3) is the kernel size (filter matrix)
classifier.add(
layers.Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation="relu")
)
# Step 2 - Pooling
classifier.add(layers.MaxPooling2D(pool_size=(2, 2)))
# Adding a second convolutional layer
classifier.add(layers.Conv2D(32, (3, 3), activation="relu"))
classifier.add(layers.MaxPooling2D(pool_size=(2, 2)))
# Step 3 - Flattening
classifier.add(layers.Flatten())
# Step 4 - Full connection
classifier.add(layers.Dense(units=128, activation="relu"))
classifier.add(layers.Dense(units=1, activation="sigmoid"))
# Compiling the CNN
classifier.compile(
optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"]
)
# Part 2 - Fitting the CNN to the images
# Load Trained model weights
# from keras.models import load_model
# regressor=load_model('cnn.h5')
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1.0 / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True
)
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
training_set = train_datagen.flow_from_directory(
"dataset/training_set", target_size=(64, 64), batch_size=32, class_mode="binary"
)
test_set = test_datagen.flow_from_directory(
"dataset/test_set", target_size=(64, 64), batch_size=32, class_mode="binary"
)
classifier.fit_generator(
training_set, steps_per_epoch=5, epochs=30, validation_data=test_set
)
classifier.save("cnn.h5")
# Part 3 - Making new predictions
test_image = tf.keras.preprocessing.image.load_img(
"dataset/single_prediction/image.png", target_size=(64, 64)
)
test_image = tf.keras.preprocessing.image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
result = classifier.predict(test_image)
# training_set.class_indices
if result[0][0] == 0:
prediction = "Normal"
if result[0][0] == 1:
prediction = "Abnormality detected"
| 1 |
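One detail worth noting in the prediction step of the script above: a sigmoid unit outputs a float in (0, 1), so the exact comparisons `result[0][0] == 0` and `== 1` will usually both fail and leave `prediction` unset. The conventional approach is to threshold at 0.5. A framework-free sketch of that idea follows; the function names are mine, not part of the script.

```python
from math import exp


def sigmoid(z: float) -> float:
    """Logistic sigmoid, the activation used on the final Dense(1) layer."""
    return 1.0 / (1.0 + exp(-z))


def label_from_logit(z: float, threshold: float = 0.5) -> str:
    """Map a raw model output to a class label by thresholding sigmoid(z)."""
    return "Abnormality detected" if sigmoid(z) >= threshold else "Normal"


if __name__ == "__main__":
    print(label_from_logit(-2.0))  # sigmoid(-2.0) ~= 0.12 -> "Normal"
    print(label_from_logit(1.5))  # sigmoid(1.5) ~= 0.82 -> "Abnormality detected"
```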
TheAlgorithms/Python | 9,543 | [pre-commit.ci] pre-commit autoupdate | <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.0.292](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.291...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](https://github.com/codespell-project/codespell/compare/v2.2.5...v2.2.6)
- [github.com/tox-dev/pyproject-fmt: 1.1.0 → 1.2.0](https://github.com/tox-dev/pyproject-fmt/compare/1.1.0...1.2.0)
<!--pre-commit.ci end--> | pre-commit-ci[bot] | "2023-10-02T23:32:55Z" | "2023-10-07T19:32:28Z" | 60291738d2552999545c414bb8a8e90f86c69678 | 895dffb412d80f29c65a062bf6d91fd2a70d8818 | [pre-commit.ci] pre-commit autoupdate. <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.291 → v0.0.292](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.291...v0.0.292)
- [github.com/codespell-project/codespell: v2.2.5 → v2.2.6](https://github.com/codespell-project/codespell/compare/v2.2.5...v2.2.6)
- [github.com/tox-dev/pyproject-fmt: 1.1.0 → 1.2.0](https://github.com/tox-dev/pyproject-fmt/compare/1.1.0...1.2.0)
<!--pre-commit.ci end--> | """Source: https://github.com/jason9075/opencv-mosaic-data-aug"""
import glob
import os
import random
from string import ascii_lowercase, digits
import cv2
import numpy as np
# Parrameters
OUTPUT_SIZE = (720, 1280) # Height, Width
SCALE_RANGE = (0.4, 0.6) # if height or width lower than this scale, drop it.
FILTER_TINY_SCALE = 1 / 100
LABEL_DIR = ""
IMG_DIR = ""
OUTPUT_DIR = ""
NUMBER_IMAGES = 250
def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMG_DIR)
for index in range(NUMBER_IMAGES):
idxs = random.sample(range(len(annos)), 4)
new_image, new_annos, path = update_image_and_anno(
img_paths,
annos,
idxs,
OUTPUT_SIZE,
SCALE_RANGE,
filter_scale=FILTER_TINY_SCALE,
)
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = path.split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_MOSAIC_{letter_code}"
cv2.imwrite(f"{file_root}.jpg", new_image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Succeeded {index+1}/{NUMBER_IMAGES} with {file_name}")
annos_list = []
for anno in new_annos:
width = anno[3] - anno[1]
height = anno[4] - anno[2]
x_center = anno[1] + width / 2
y_center = anno[2] + height / 2
obj = f"{anno[0]} {x_center} {y_center} {width} {height}"
annos_list.append(obj)
with open(f"{file_root}.txt", "w") as outfile:
outfile.write("\n".join(line for line in annos_list))
def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
- label_dir <type: str>: Path to label include annotation of images
- img_dir <type: str>: Path to folder contain images
Return <type: list>: List of images path and labels
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
xmin = float(obj[1]) - float(obj[3]) / 2
ymin = float(obj[2]) - float(obj[4]) / 2
xmax = float(obj[1]) + float(obj[3]) / 2
ymax = float(obj[2]) + float(obj[4]) / 2
boxes.append([int(obj[0]), xmin, ymin, xmax, ymax])
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels
def update_image_and_anno(
all_img_list: list,
all_annos: list,
idxs: list[int],
output_size: tuple[int, int],
scale_range: tuple[float, float],
filter_scale: float = 0.0,
) -> tuple[list, list, str]:
"""
- all_img_list <type: list>: list of all images
- all_annos <type: list>: list of all annotations of specific image
- idxs <type: list>: index of image in list
- output_size <type: tuple>: size of output image (Height, Width)
- scale_range <type: tuple>: range of scale image
- filter_scale <type: float>: the condition of downscale image and bounding box
Return:
- output_img <type: narray>: image after resize
- new_anno <type: list>: list of new annotation after scale
- path[0] <type: string>: get the name of image file
"""
output_img = np.zeros([output_size[0], output_size[1], 3], dtype=np.uint8)
scale_x = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
scale_y = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
divid_point_x = int(scale_x * output_size[1])
divid_point_y = int(scale_y * output_size[0])
new_anno = []
path_list = []
for i, index in enumerate(idxs):
path = all_img_list[index]
path_list.append(path)
img_annos = all_annos[index]
img = cv2.imread(path)
if i == 0: # top-left
img = cv2.resize(img, (divid_point_x, divid_point_y))
output_img[:divid_point_y, :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = bbox[2] * scale_y
xmax = bbox[3] * scale_x
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 1: # top-right
img = cv2.resize(img, (output_size[1] - divid_point_x, divid_point_y))
output_img[:divid_point_y, divid_point_x : output_size[1], :] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = bbox[2] * scale_y
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 2: # bottom-left
img = cv2.resize(img, (divid_point_x, output_size[0] - divid_point_y))
output_img[divid_point_y : output_size[0], :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = bbox[3] * scale_x
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
else: # bottom-right
img = cv2.resize(
img, (output_size[1] - divid_point_x, output_size[0] - divid_point_y)
)
output_img[
divid_point_y : output_size[0], divid_point_x : output_size[1], :
] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
# Remove bounding box small than scale of filter
if filter_scale > 0:
new_anno = [
anno
for anno in new_anno
if filter_scale < (anno[3] - anno[1]) and filter_scale < (anno[4] - anno[2])
]
return output_img, new_anno, path_list[0]
def random_chars(number_char: int) -> str:
"""
Automatic generate random 32 characters.
Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
    assert number_char > 1, "The number of characters should be greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))
if __name__ == "__main__":
main()
print("DONE β
")
| """Source: https://github.com/jason9075/opencv-mosaic-data-aug"""
import glob
import os
import random
from string import ascii_lowercase, digits

import cv2
import numpy as np

# Parameters
OUTPUT_SIZE = (720, 1280) # Height, Width
SCALE_RANGE = (0.4, 0.6)
FILTER_TINY_SCALE = 1 / 100  # if height or width lower than this scale, drop it
LABEL_DIR = ""
IMG_DIR = ""
OUTPUT_DIR = ""
NUMBER_IMAGES = 250


def main() -> None:
"""
Get images list and annotations list from input dir.
Update new images and annotations.
Save images and annotations in output dir.
"""
img_paths, annos = get_dataset(LABEL_DIR, IMG_DIR)
for index in range(NUMBER_IMAGES):
idxs = random.sample(range(len(annos)), 4)
new_image, new_annos, path = update_image_and_anno(
img_paths,
annos,
idxs,
OUTPUT_SIZE,
SCALE_RANGE,
filter_scale=FILTER_TINY_SCALE,
)
# Get random string code: '7b7ad245cdff75241935e4dd860f3bad'
letter_code = random_chars(32)
file_name = path.split(os.sep)[-1].rsplit(".", 1)[0]
file_root = f"{OUTPUT_DIR}/{file_name}_MOSAIC_{letter_code}"
cv2.imwrite(f"{file_root}.jpg", new_image, [cv2.IMWRITE_JPEG_QUALITY, 85])
print(f"Succeeded {index+1}/{NUMBER_IMAGES} with {file_name}")
annos_list = []
for anno in new_annos:
width = anno[3] - anno[1]
height = anno[4] - anno[2]
x_center = anno[1] + width / 2
y_center = anno[2] + height / 2
obj = f"{anno[0]} {x_center} {y_center} {width} {height}"
annos_list.append(obj)
with open(f"{file_root}.txt", "w") as outfile:
            outfile.write("\n".join(annos_list))


def get_dataset(label_dir: str, img_dir: str) -> tuple[list, list]:
"""
    - label_dir <type: str>: Path to the folder containing annotation files for the images
    - img_dir <type: str>: Path to the folder containing the images
    Return <type: tuple>: Lists of image paths and labels
"""
img_paths = []
labels = []
for label_file in glob.glob(os.path.join(label_dir, "*.txt")):
label_name = label_file.split(os.sep)[-1].rsplit(".", 1)[0]
with open(label_file) as in_file:
obj_lists = in_file.readlines()
img_path = os.path.join(img_dir, f"{label_name}.jpg")
boxes = []
for obj_list in obj_lists:
obj = obj_list.rstrip("\n").split(" ")
xmin = float(obj[1]) - float(obj[3]) / 2
ymin = float(obj[2]) - float(obj[4]) / 2
xmax = float(obj[1]) + float(obj[3]) / 2
ymax = float(obj[2]) + float(obj[4]) / 2
boxes.append([int(obj[0]), xmin, ymin, xmax, ymax])
if not boxes:
continue
img_paths.append(img_path)
labels.append(boxes)
return img_paths, labels


def update_image_and_anno(
all_img_list: list,
all_annos: list,
idxs: list[int],
output_size: tuple[int, int],
scale_range: tuple[float, float],
filter_scale: float = 0.0,
) -> tuple[list, list, str]:
"""
- all_img_list <type: list>: list of all images
- all_annos <type: list>: list of all annotations of specific image
- idxs <type: list>: index of image in list
- output_size <type: tuple>: size of output image (Height, Width)
- scale_range <type: tuple>: range of scale image
- filter_scale <type: float>: the condition of downscale image and bounding box
Return:
- output_img <type: narray>: image after resize
- new_anno <type: list>: list of new annotation after scale
    - path_list[0] <type: string>: the file path of the first image
"""
output_img = np.zeros([output_size[0], output_size[1], 3], dtype=np.uint8)
scale_x = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
scale_y = scale_range[0] + random.random() * (scale_range[1] - scale_range[0])
divid_point_x = int(scale_x * output_size[1])
divid_point_y = int(scale_y * output_size[0])
new_anno = []
path_list = []
for i, index in enumerate(idxs):
path = all_img_list[index]
path_list.append(path)
img_annos = all_annos[index]
img = cv2.imread(path)
if i == 0: # top-left
img = cv2.resize(img, (divid_point_x, divid_point_y))
output_img[:divid_point_y, :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = bbox[2] * scale_y
xmax = bbox[3] * scale_x
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 1: # top-right
img = cv2.resize(img, (output_size[1] - divid_point_x, divid_point_y))
output_img[:divid_point_y, divid_point_x : output_size[1], :] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = bbox[2] * scale_y
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = bbox[4] * scale_y
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
elif i == 2: # bottom-left
img = cv2.resize(img, (divid_point_x, output_size[0] - divid_point_y))
output_img[divid_point_y : output_size[0], :divid_point_x, :] = img
for bbox in img_annos:
xmin = bbox[1] * scale_x
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = bbox[3] * scale_x
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
else: # bottom-right
img = cv2.resize(
img, (output_size[1] - divid_point_x, output_size[0] - divid_point_y)
)
output_img[
divid_point_y : output_size[0], divid_point_x : output_size[1], :
] = img
for bbox in img_annos:
xmin = scale_x + bbox[1] * (1 - scale_x)
ymin = scale_y + bbox[2] * (1 - scale_y)
xmax = scale_x + bbox[3] * (1 - scale_x)
ymax = scale_y + bbox[4] * (1 - scale_y)
new_anno.append([bbox[0], xmin, ymin, xmax, ymax])
    # Remove bounding boxes smaller than the filter scale
if filter_scale > 0:
new_anno = [
anno
for anno in new_anno
if filter_scale < (anno[3] - anno[1]) and filter_scale < (anno[4] - anno[2])
]
return output_img, new_anno, path_list[0]


def random_chars(number_char: int) -> str:
"""
    Automatically generate a string of random lowercase letters and digits,
    e.g. a random 32-character code: '7b7ad245cdff75241935e4dd860f3bad'
>>> len(random_chars(32))
32
"""
    assert number_char > 1, "The number of characters should be greater than 1"
letter_code = ascii_lowercase + digits
return "".join(random.choice(letter_code) for _ in range(number_char))


if __name__ == "__main__":
main()
print("DONE β
")
| 1 |