List Comprehensions
A list comprehension creates a new list from an iterable with an optional filter, in a single expression:
# Basic syntax: [expression for item in iterable if condition]
# Without comprehension
squares = []
for n in range(10):
    squares.append(n ** 2)
# With comprehension
squares = [n ** 2 for n in range(10)]
# With filter
even_squares = [n ** 2 for n in range(10) if n % 2 == 0]
# [0, 4, 16, 36, 64]
# Nested
pairs = [(x, y) for x in [1, 2, 3] for y in [4, 5] if x != y]
Dict and Set Comprehensions
# Dict comprehension
word_lengths = {word: len(word) for word in ["hello", "world", "python"]}
# {'hello': 5, 'world': 5, 'python': 6}
# Invert a dictionary
inverted = {v: k for k, v in original.items()}
# Set comprehension (unique values)
unique_lengths = {len(word) for word in ["hello", "world", "hi"]}
# {2, 5}
Generator Expressions
Use parentheses instead of brackets for a lazy generator that doesn't build the full list in memory — ideal for large datasets:
# List: evaluates all at once, stores in memory
total = sum([x ** 2 for x in range(1_000_000)])
# Generator: evaluates lazily, minimal memory
total = sum(x ** 2 for x in range(1_000_000))
# Pass directly to functions
with open("file.txt") as f:
    max_length = max(len(line) for line in f)
has_error = any("ERROR" in line for line in log_lines)
When to Use vs Regular Loops
Use comprehensions for simple transformations and filters. Use regular loops when: the logic is complex (multiple if/else branches), you need side effects, or readability would suffer from nesting more than 2 levels.
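A minimal sketch of that guideline, using a hypothetical parse_price helper and sample data (both are illustrative assumptions, not from the original):

```python
raw_rows = ["3.50", "oops", "7.25", ""]

def parse_price(text):
    """Return the price as a float, or None if unparseable."""
    try:
        return float(text)
    except ValueError:
        return None

# Simple transform + filter: a comprehension reads well.
prices = [p for row in raw_rows if (p := parse_price(row)) is not None]

# Multiple branches plus a side effect (logging): a regular loop is clearer.
prices_checked = []
for row in raw_rows:
    p = parse_price(row)
    if p is None:
        print(f"skipping unparseable row: {row!r}")
    elif p > 0:
        prices_checked.append(p)
```

Both produce the same prices here, but the loop version has room for the logging branch without cramming everything into one expression.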
Frequently Asked Questions
Are list comprehensions faster than for loops?
Yes, typically 10-30% faster for building a list, because the comprehension avoids the repeated `list.append` method lookup and call on each iteration and runs as specialized bytecode. But for CPU-heavy numeric work, consider NumPy vectorized operations, which are often 10-100x faster than either approach.
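You can check the difference on your own interpreter with the standard-library timeit module (exact numbers vary by machine and Python version; this is a measurement sketch, not a benchmark claim):

```python
import timeit

# Time building the same list of squares both ways.
loop_time = timeit.timeit(
    "result = []\nfor n in range(1000):\n    result.append(n ** 2)",
    number=1_000,
)
comp_time = timeit.timeit(
    "[n ** 2 for n in range(1000)]",
    number=1_000,
)
print(f"loop: {loop_time:.3f}s  comprehension: {comp_time:.3f}s")
```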
Can I use walrus operator (:=) in comprehensions?
Yes, on Python 3.8+. For example, [y ** 2 for x in data if (y := f(x)) > 0] calls f(x) once per item and reuses the result in both the filter and the output expression, which is useful when the filter and the value need the same expensive computation.
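A runnable sketch of that pattern; the expensive function and data here are illustrative assumptions:

```python
import math

def expensive(x):
    # Stand-in for a costly computation.
    return math.sqrt(x) * 2

data = [1, 4, 9, 16]

# Without the walrus operator, expensive(x) runs twice per item:
# once in the filter and once in the output expression.
doubled = [expensive(x) for x in data if expensive(x) > 4]

# With the walrus operator, expensive(x) runs once per item;
# y is bound in the filter and reused as the output.
doubled_once = [y for x in data if (y := expensive(x)) > 4]

assert doubled == doubled_once
```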