Commit e5ec0a9 (1 parent 18160a7)
Days 14 and 15

File tree

2 files changed

+348
-0
lines changed


2023/Day 14.ipynb

Lines changed: 205 additions & 0 deletions
@@ -0,0 +1,205 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# A weighty task\n",
"\n",
"- https://adventofcode.com/2023/day/14\n",
"\n",
"For this task, you don't actually have to roll stones around. All you need to do is split each column on (consecutive sequences of) cube-shaped rocks, and count the number of rolling rocks in each section. You can calculate their weight by taking the offset of the section into consideration.\n",
"\n",
"If you started at the north end, the heaviest rock has weight 10, and if there are 4 rolling rocks you know that they all weigh more than $10 - 4 = 6$. Adding up the consecutive numbers between two points (10 and 6 here) can be done by taking the [triangle number](https://en.wikipedia.org/wiki/Triangular_number) of each of the two values and subtracting one from the other. Triangle numbers are trivial to compute; the formula is $\\frac{n(n+1)}{2}$.\n",
"\n",
"To split by consecutive cube-shaped rocks, split the string using a regular expression with a group in it; the [`re.split()` function](https://docs.python.org/3/library/re.html#re.split) (or the equivalent method on a compiled regular expression) then not only returns the strings between the pattern, but also the parts matched by the group. Splitting on `(#+)` produces alternating strings of stationary cube-shaped boulders, and sections with rolling boulders and empty space. If the first character of a line were the north end of the map, then a boulder at that position would weigh `len(line)`, etc. As you process all the groups from a split, keep track of the maximum weight for that section by subtracting the length of each group as you iterate.\n",
"\n",
"We do then need to re-orient our map to put the north-south line along the text lines. We can use a simple Python transposition trick for this: if you pass the lines of the map to the `zip()` function, as separate arguments, it will yield tuples with the characters of each column. It's as if you rotated the input text by 90 degrees to the right and then mirrored each of the resulting lines.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"import typing as t\n",
"\n",
"\n",
"def _tn(n: int) -> int:\n",
"    \"\"\"triangle number\"\"\"\n",
"    return n * (n + 1) // 2\n",
"\n",
"\n",
"def _by_column(map: str) -> list[str]:\n",
"    return [\"\".join(col) for col in zip(*map.splitlines())]\n",
"\n",
"\n",
"_cube_shaped = re.compile(r\"(#+)\")\n",
"\n",
"\n",
"def total_load(map: str) -> int:\n",
"    total = 0\n",
"    for col in _by_column(map):\n",
"        weight = len(col)\n",
"        for group in _cube_shaped.split(col):\n",
"            if rolling := group.count(\"O\"):\n",
"                total += _tn(weight) - _tn(weight - rolling)\n",
"            weight -= len(group)\n",
"    return total\n",
"\n",
"\n",
"test_platform = \"\"\"\\\n",
"O....#....\n",
"O.OO#....#\n",
".....##...\n",
"OO.#O....O\n",
".O.....O#.\n",
"O.#..O.#.#\n",
"..O..#O..O\n",
".......O..\n",
"#....###..\n",
"#OO..#....\n",
"\"\"\"\n",
"\n",
"\n",
"assert total_load(test_platform) == 136"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Part 1: 109098\n"
]
}
],
"source": [
"import aocd\n",
"\n",
"platform = aocd.get_data(day=14, year=2023)\n",
"print(\"Part 1:\", total_load(platform))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Cycling it up\n",
"\n",
"Part two can be solved with more string manipulation. I did have to make two changes to my part 1 implementation:\n",
"\n",
"- We can't just use transpositions now; we need proper rotations. Simply reverse each line after transposing columns to rows.\n",
"- Calculating the weights needs to be a separate step now. I switched to just counting rolling rocks per line, and I reversed the map lines so the last line is processed first, etc. That allows us to use the [`enumerate()` function](https://docs.python.org/3/library/functions.html#enumerate) to provide the right weight value for each rolling rock.\n",
"\n",
"Experienced participants will of course have recognized that we don't really want to cycle the map 1 billion times. Past AoC puzzles have taught us to look for repeating patterns: keep track of what the map looked like at each step, and if you encounter the same map later on you know the length of the loop, so you can fast-forward to near the end.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"64\n"
]
}
],
"source": [
"def _rotate(map: str) -> str:\n",
"    return \"\\n\".join(\"\".join(col[::-1]) for col in zip(*map.splitlines()))\n",
"\n",
"\n",
"def _roll(map: str) -> str:\n",
"    # move every rolling rock to the end of the group\n",
"    lines: list[str] = [\n",
"        \"\".join(\n",
"            [\n",
"                g.replace(\"O\", \"\") + \"O\" * g.count(\"O\")\n",
"                for g in t.cast(list[str], _cube_shaped.split(line))\n",
"            ]\n",
"        )\n",
"        for line in map.splitlines()\n",
"    ]\n",
"    return \"\\n\".join(lines)\n",
"\n",
"\n",
"def _cycle(map: str) -> str:\n",
"    for _ in range(4):\n",
"        map = _roll(_rotate(map))\n",
"    return map\n",
"\n",
"\n",
"def cycle(map: str, steps: int) -> str:\n",
"    states: dict[str, int] = {}\n",
"    step = 0\n",
"    while step < steps:\n",
"        map = _cycle(map)\n",
"        if (prev := states.get(map)) is not None:\n",
"            # cycle found, we can fast-forward now\n",
"            length = step - prev\n",
"            step += (steps - step) // length * length\n",
"        else:\n",
"            states[map] = step\n",
"        step += 1\n",
"    return map\n",
"\n",
"\n",
"def total_load(map: str) -> int:\n",
"    return sum(\n",
"        row.count(\"O\") * i for i, row in enumerate(reversed(map.splitlines()), 1)\n",
"    )\n",
"\n",
"\n",
"print(total_load(cycle(test_platform, 1_000_000_000)))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"100064\n"
]
}
],
"source": [
"print(total_load(cycle(platform, 1_000_000_000)))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
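A quick standalone sketch (not part of the committed notebook; all names here are illustrative) of the two tricks the Day 14 write-up relies on: computing the load of a run of rolled rocks as a difference of triangle numbers, and transposing a text map with `zip()`.

```python
def tn(n: int) -> int:
    """n-th triangular number: 1 + 2 + ... + n."""
    return n * (n + 1) // 2


# Four rolling rocks at the north end of a 10-row column weigh 10, 9, 8
# and 7; their total load is a difference of two triangular numbers.
assert 10 + 9 + 8 + 7 == tn(10) - tn(10 - 4) == 34

# The transposition trick: zip(*lines) yields the columns of the map,
# one tuple of characters per column.
lines = ["OO.", ".#.", "..O"]
columns = ["".join(col) for col in zip(*lines)]
assert columns == ["O..", "O#.", "..O"]
```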

2023/Day 15.ipynb

Lines changed: 143 additions & 0 deletions
@@ -0,0 +1,143 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hashing it all out\n",
"\n",
"- https://adventofcode.com/2023/day/15\n",
"\n",
"We are creating a simple [hash function](https://en.wikipedia.org/wiki/Hash_function). Because there are only 256 possible outputs for the function, and there are only [128 possible ASCII codepoints](https://en.wikipedia.org/wiki/ASCII), I've chosen to create a small table of 128 + 256 = 384 possible outputs for the new value of the hash after adding the next ASCII number. The table can then replace the multiplication and remainder operations.\n",
"\n",
"To map the characters of the input string to their ASCII codepoints, just encode the string to an ASCII `bytes` value; iterating over a Python `bytes` object gives you the integer byte values, which are the ASCII codepoints we wanted.\n",
"\n",
"As a final nod to performance, I'm storing the lookup table as a private keyword argument to the function, so that Python can look it up as a local variable. This is faster than looking up global variables. Perhaps part two will up the ante, requiring hashing huge strings, and then every nanosecond counts. :D As it stands, on my laptop the function now takes about 145 nanoseconds to hash the word `\"HASH\"`. Not bad for Python code!\n"
]
},
18+
{
19+
"cell_type": "code",
20+
"execution_count": 1,
21+
"metadata": {},
22+
"outputs": [],
23+
"source": [
24+
"import typing as t\n",
25+
"\n",
26+
"_TABLE: t.Final[tuple[int, ...]] = tuple((i * 17) % 256 for i in range(256 + 128))\n",
27+
"\n",
28+
"\n",
29+
"def holiday_ascii_string_helper(s: str, _t: tuple[int, ...] = _TABLE) -> int:\n",
30+
" hash = 0\n",
31+
" for c in s.encode(\"ascii\"):\n",
32+
" hash = _t[hash + c]\n",
33+
" return hash\n",
34+
"\n",
35+
"\n",
36+
"assert holiday_ascii_string_helper(\"HASH\") == 52\n",
37+
"test_steps = str(\"rn=1,cm-,qp=3,cm=2,qp-,pc=4,ot=9,ab=5,pc-,pc=6,ot=7\").split(\",\")\n",
38+
"assert sum(map(holiday_ascii_string_helper, test_steps)) == 1320"
39+
]
40+
},
41+
{
42+
"cell_type": "code",
43+
"execution_count": 2,
44+
"metadata": {},
45+
"outputs": [
46+
{
47+
"name": "stdout",
48+
"output_type": "stream",
49+
"text": [
50+
"Part 1: 511343\n"
51+
]
52+
}
53+
],
54+
"source": [
55+
"import aocd\n",
56+
"\n",
57+
"steps = aocd.get_data(day=15, year=2023).strip().split(\",\")\n",
58+
"print(\"Part 1:\", sum(map(holiday_ascii_string_helper, steps)))"
59+
]
60+
},
61+
{
62+
"cell_type": "markdown",
63+
"metadata": {},
64+
"source": [
65+
"# It's a dictionary!\n",
66+
"\n",
67+
"For part two, we are going to implement a real [hashmap](https://en.wikipedia.org/wiki/Hashmap); or, as Python calls it, the [_dictionary type_](https://docs.python.org/3/library/stdtypes.html#dict), aka `dict`. In a hashmap, the boxes are commonly referred to as 'buckets'; a hashing algorithm selects what bucket to store key-value pairs into (or just to find the value for a given key). Because multiple input values can hash to the same bucket, you need a way to handle _collisions_, a way to store multiple values in the same bucket.\n",
68+
"\n",
69+
"This is what happens inside the boxes here, it's a [collision resolution scheme](https://en.wikipedia.org/wiki/Hash_table#Collision_resolution), and here we are using _separate chaining_ to put multiple values into the same bucket. The Python dictionary type uses a different resolution scheme, it uses [_open addressing_](https://en.wikipedia.org/wiki/Hash_table#Open_addressing), but the idea is the same. When you look up a key to find the value, you use the hash function to find the corresponding bucket and then use equality tests for each key there in turn until you have the correct one.\n",
70+
"\n",
71+
"Since Python 3.6 Python's dictionaries [preserve insertion order](https://stackoverflow.com/a/39537308/100297), and since Python 3.7, this fact was enshrined in the Python language specification. This means that when you use `dictonary[key] = value` to set a value in the dictionary, it'll add that key to the 'end' of the ordering, _unless_ the key is already in the table, at which point it'll just keep the same position. Looping over the keys or values of the dictionary then produces those keys or values in the order they were inserted. That's very handy here, we just make our boxes Python dictionaries, and so avoid having to test every key in each box to see if there already is a given label in that box.\n"
72+
]
73+
},
74+
{
75+
"cell_type": "code",
76+
"execution_count": 3,
77+
"metadata": {},
78+
"outputs": [],
79+
"source": [
80+
"import re\n",
81+
"\n",
82+
"_instr = re.compile(r\"([-=])\")\n",
83+
"\n",
84+
"\n",
85+
"def hashmap(steps: list[str]) -> int:\n",
86+
" boxes: tuple[dict[str, int], ...] = tuple({} for _ in range(256))\n",
87+
" for step in steps:\n",
88+
" label, instr, lens = _instr.split(step)\n",
89+
" box = boxes[holiday_ascii_string_helper(label)]\n",
90+
" if instr == \"-\":\n",
91+
" box.pop(label, None)\n",
92+
" else:\n",
93+
" box[label] = int(lens)\n",
94+
" return sum(\n",
95+
" b * l * lens\n",
96+
" for b, box in enumerate(boxes, 1)\n",
97+
" for l, lens in enumerate(box.values(), 1)\n",
98+
" )\n",
99+
"\n",
100+
"\n",
101+
"assert hashmap(test_steps) == 145"
102+
]
103+
},
104+
{
105+
"cell_type": "code",
106+
"execution_count": 4,
107+
"metadata": {},
108+
"outputs": [
109+
{
110+
"name": "stdout",
111+
"output_type": "stream",
112+
"text": [
113+
"Part 2: 294474\n"
114+
]
115+
}
116+
],
117+
"source": [
118+
"print(\"Part 2:\", hashmap(steps))"
119+
]
120+
}
121+
],
122+
"metadata": {
123+
"kernelspec": {
124+
"display_name": "Python 3 (ipykernel)",
125+
"language": "python",
126+
"name": "python3"
127+
},
128+
"language_info": {
129+
"codemirror_mode": {
130+
"name": "ipython",
131+
"version": 3
132+
},
133+
"file_extension": ".py",
134+
"mimetype": "text/x-python",
135+
"name": "python",
136+
"nbconvert_exporter": "python",
137+
"pygments_lexer": "ipython3",
138+
"version": "3.12.1"
139+
}
140+
},
141+
"nbformat": 4,
142+
"nbformat_minor": 4
143+
}
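A standalone sketch of the Day 15 lookup-table idea described above (illustrative names, not part of the commit): the precomputed-table version must agree with the straightforward multiply-and-modulo version for every input.

```python
def hash_plain(s: str) -> int:
    """The straightforward HASH: add the codepoint, multiply by 17, mod 256."""
    h = 0
    for c in s.encode("ascii"):
        h = (h + c) * 17 % 256
    return h


# Precompute (i * 17) % 256 for every value h + c can take:
# h is always < 256 and an ASCII codepoint c is always < 128.
TABLE = tuple((i * 17) % 256 for i in range(256 + 128))


def hash_table(s: str) -> int:
    """Same function, with the arithmetic replaced by a table lookup."""
    h = 0
    for c in s.encode("ascii"):
        h = TABLE[h + c]
    return h


assert hash_plain("HASH") == hash_table("HASH") == 52
```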
