%pylab inline
The NumPy function "meshgrid" can be used to evaluate a function of two variables over a two-dimensional grid of points.
def distance(x, y):
    return sqrt(x**2 + y**2)
distance(3, 4)
x = arange(4)
y = arange(3)
print(x)
print(y)
X, Y = meshgrid(x, y)
print(X)
print(Y)
print(distance(X, Y))
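As an aside (not part of the original notes), the same grid of distances can be computed with NumPy's built-in "hypot", which gives a quick check on the meshgrid result:
print(hypot(X, Y))                    # sqrt(X**2 + Y**2), computed element-wise
print(allclose(distance(X, Y), hypot(X, Y)))   # confirm both approaches agree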
To build a grid of points in the complex plane, first create the real and imaginary axes.
x = linspace(-1, 1, 5)
y = linspace(-1j, 1j, 5)
print(x)
print(y)
Then use meshgrid to create 2d arrays of the real and imaginary parts of each point
X, Y = meshgrid(x, y)
print(X)
print(Y)
Finally, add the real and imaginary parts to create a 2D array of complex numbers
Z = X + Y
print(Z)
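As an optional aside, the same complex grid can be built without meshgrid by using NumPy broadcasting (assuming the %pylab inline namespace loaded at the top):
Z_alt = x[newaxis, :] + y[:, newaxis]   # row of real parts plus column of imaginary parts
print(allclose(Z, Z_alt))               # same array as the meshgrid construction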
First create a grid of points in the complex plane
npts = 100
x = linspace(-1, 1, npts)
y = linspace(-1j, 1j, npts)
X, Y = meshgrid(x, y)
Z = X + Y
Iterate using Newton's Method on the function $f(z) = z^3 - 1$
$\Rightarrow$ $z_{i + 1} = z_i - \dfrac{f(z_i)}{f'(z_i)} = z_i - \dfrac{z_i^3 - 1}{3z_i^2}$
niters = 20
for i in range(niters):
    Z -= (Z**3 - 1)/(3*Z**2)    # one Newton step applied to every grid point at once
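A caveat worth noting (an aside, not part of the original notes): if the grid contained the point $z = 0$ (for example with an odd npts, which puts zero at the centre), this update would divide by zero. A guarded variant that only updates points where the derivative $3z^2$ is nonzero might look like this; for the grid above it gives the same result:
Z = X + Y                       # restart from the initial grid
for i in range(niters):
    nonzero = Z != 0            # points that can be updated safely
    Z[nonzero] -= (Z[nonzero]**3 - 1)/(3*Z[nonzero]**2)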
Create variables for the roots of $f(z)$
r1 = complex(1, 0)
r2 = complex(-.5, sqrt(3)/2.)
r3 = complex(-.5, -sqrt(3)/2.)
Create a boolean array testing convergence to each root.
tol = 1e-6
root1s = abs(Z - r1) < tol
root2s = abs(Z - r2) < tol
root3s = abs(Z - r3) < tol
Finally, create an image array and set the red, green and blue components depending on which root each initial $z$ value has converged to.
img = zeros((npts, npts, 3))
img[:,:,0] = root1s
img[:,:,1] = root2s
img[:,:,2] = root3s
imshow(img[::-1])
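An aside on the display: the [::-1] flip compensates for imshow drawing row 0 at the top of the image. An alternative is to let imshow place the origin at the lower left and label the axes with the actual coordinates:
# same image, using imshow's origin/extent handling instead of flipping the array
imshow(img, origin='lower', extent=[-1, 1, -1, 1])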
Alternatively, all three components can be set at the same time using Boolean indexing
img = zeros((npts, npts, 3))
img[root1s] = [1, 1, 0]
img[root2s] = [1, 0, 1]
img[root3s] = [0, 1, 1]
imshow(img[::-1])
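As a quick sanity check (an optional aside), count how many grid points landed in each basin and how many did not converge within the tolerance:
print(root1s.sum(), root2s.sum(), root3s.sum())   # points converged to each root
print((~(root1s | root2s | root3s)).sum())        # points not converged within tol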
Say we want to numerically estimate the mean value of a function by $\bar{f} = \dfrac{1}{n} \sum_{i=1}^n f(x_i)$
A "uniform grid" of equally spaced points may not be the best choice for $\{x_1, x_2, \ldots, x_n\}$
def testf(x):
    return 2 + 2*sin(50*x)
x = linspace(0, 1, 400)
sample = linspace(0, 1, 9)
plot(x, testf(x))
plot(sample, testf(sample), 'ro')
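To see the effect on the estimate itself (an illustrative sketch; "rand" draws uniform random numbers from [0, 1) in the %pylab namespace), compare the 9-point uniform-grid average and a 9-point random-sample average against a dense-grid reference value:
fine = linspace(0, 1, 100001)   # dense reference grid
print(mean(testf(fine)))        # reference value for the true mean
print(mean(testf(sample)))      # estimate from the 9-point uniform grid
xrand = rand(9)                 # 9 uniformly distributed random points
print(mean(testf(xrand)))       # estimate from the random points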
Choosing "random" numbers on the interval can help minimize errors and avoid bias.
For these kinds of applications, we want such numbers to have the properties of being:
1) Uniformly distributed. Statistically, the numbers are evenly spread over the entire interval.
2) Independent of each other. There should be a lack of "pairwise" and higher-order correlation between numbers in the sequence.
We can test for these properties experimentally by:
1) Plotting a "histogram" of the sequence of numbers - "A graphical representation of the distribution of numerical data".
2) Plotting $x_{i + 1}$ against $x_i$ for a large number of points. If there is no pairwise correlation, then the points will be scattered throughout the square. (Both checks are sketched below for NumPy's built-in generator.)
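A minimal sketch of both checks applied to NumPy's own uniform generator (assuming the %pylab inline namespace):
u = rand(100000)
h = hist(u, bins=50)                        # roughly flat histogram
figure()                                    # new figure for the lag plot
plot(u[:-1], u[1:], 'b.', ms=1, alpha=.7)   # successive pairs scatter over the whole square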
As a counterexample, consider the "Mayfly Model" iteration $x_{i+1} = b(1 - x_i)x_i$ with $b = 4$. Its histogram shows a non-uniform distribution, with distinctly higher densities of points at the ends of the interval [0, 1].
x = .4
b = 4.
npts = 100000
results = empty(npts)
for i in range(npts):
    results[i] = x              # record the current value
    x = b*(1-x)*x               # then apply the map x -> b(1 - x)x
h = hist(results, bins=50)
Even worse, the "Mayfly Model" fails the "no pairwise correlation" test badly, since every pair ($x_i$, $x_{i+1}$) lies on the graph of $g(x) = 4(1 - x)x$.
npts = 1000
results = empty(npts)
for i in range(npts):
    results[i] = x
    x = b*(1-x)*x
plot(results[:-1], results[1:], 'b.', ms=1, alpha=.7)
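Overplotting the parabola $g(x) = 4(1 - x)x$ (an optional aside) makes the pairwise structure explicit; every plotted pair falls on this curve:
plot(results[:-1], results[1:], 'b.', ms=1, alpha=.7)
xg = linspace(0, 1, 400)
plot(xg, 4*(1 - xg)*xg, 'r-', lw=1)     # the curve the pairs lie on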
We want a function that effectively "fills in" the square.
Consider the graph of $g(x) = ax \mod 1$
With a = 2, we have the following graph.
a = 2
x = linspace(0, 1, 400)
plot(x, a*x % 1, 'b.')
As a increases, the graph gets steeper and steeper.
a = 20
x = linspace(0, 1, 5000)
plot(x, a*x % 1, 'b.', ms=2)
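For illustration only (a sketch, not necessarily how the notes continue), iterating $x_{i+1} = a x_i \mod 1$ with a much larger multiplier and plotting successive pairs shows the lag plot beginning to fill the square, in contrast with the Mayfly Model:
a_big = 1234567          # arbitrary large multiplier, chosen only for illustration
xi = 0.3                 # starting value
n = 1000
seq = empty(n)
for i in range(n):
    seq[i] = xi
    xi = a_big*xi % 1
plot(seq[:-1], seq[1:], 'b.', ms=2)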