
Nonlinear Problems

Now we shall address how to solve nonlinear PDEs in FEniCS. Our sample PDE for implementation is taken as a nonlinear Poisson equation:

-\nabla\cdot\left( q(u)\nabla u\right) = f\thinspace .

The coefficient q(u) makes the equation nonlinear (unless q(u) is constant in u).

To be able to easily verify our implementation, we choose the domain, q(u), f, and the boundary conditions such that we have a simple, exact solution u. Let \Omega be the unit hypercube [0, 1]^d in d dimensions, q(u)=(1+u)^m, f=0, u=0 for x_0=0, u=1 for x_0=1, and \partial u/\partial n=0 at all other boundaries x_i=0 and x_i=1, i=1,\ldots,d-1. The coordinates are now represented by the symbols x_0,\ldots,x_{d-1}. The exact solution is then

u(x_0,\ldots,x_{d-1}) = \left((2^{m+1}-1)x_0 + 1\right)^{1/(m+1)} - 1\thinspace .
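To see where this formula comes from, note that the homogeneous Neumann conditions in the directions x_1,\ldots,x_{d-1} make u depend on x_0 only, so the PDE reduces to an ODE in x_0 that can be integrated twice. A quick sketch of the calculation:

-{d\over dx_0}\left((1+u)^m{du\over dx_0}\right) = 0
\quad\Rightarrow\quad
{d\over dx_0}\left((1+u)^{m+1}\right) = C
\quad\Rightarrow\quad
(1+u)^{m+1} = Cx_0 + D\thinspace .

The boundary values u=0 at x_0=0 and u=1 at x_0=1 give D=1 and C=2^{m+1}-1, and solving for u reproduces the formula above.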

We refer to the section Parameterizing the Number of Space Dimensions for details on formulating a PDE problem in d space dimensions.

The variational formulation of our model problem reads: Find u \in V such that

(1) F(u; v) = 0 \quad \forall v \in \hat{V},

where

(2) F(u; v) = \int_\Omega q(u)\nabla u\cdot \nabla v \, \mathrm{d}x,

and

\hat{V} &= \{v \in H^1(\Omega) : v = 0 \mbox{ on } x_0=0\mbox{ and }x_0=1\}, \\
 V      &= \{v \in H^1(\Omega) : v = 0 \mbox{ on } x_0=0\mbox{ and } v = 1\mbox{ on }x_0=1\}\thinspace .

The discrete problem arises as usual by restricting V and \hat V to a pair of discrete spaces. As before, we omit any subscript on the discrete spaces and simply say that V and \hat V are chosen finite dimensional according to some mesh and element type. Similarly, we let u denote the discrete solution and write u_{\mbox{e}} for the exact solution if it becomes necessary to distinguish between the two.

The discrete nonlinear problem is then written as: Find u\in V such that

(3) F(u; v) = 0 \quad \forall v \in \hat{V},

with u = \sum_{j=1}^N U_j \phi_j. Since F is a nonlinear function of u, the variational statement gives rise to a system of nonlinear algebraic equations. FEniCS can be used in alternative ways for solving a nonlinear PDE problem. We shall in the following subsections go through four solution strategies:

  1. a simple Picard-type iteration,
  2. a Newton method at the algebraic level,
  3. a Newton method at the PDE level, and
  4. an automatic approach where FEniCS attacks the nonlinear variational problem directly.

The “black box” strategy 4 is definitely the simplest one from a programmer’s point of view, but the others give more manual control of the solution process for nonlinear equations (which also has some pedagogical advantages, especially for newcomers to nonlinear finite element problems).

Picard Iteration

Picard iteration is an easy way of handling nonlinear PDEs: we simply use a known, previous solution in the nonlinear terms so that these terms become linear in the unknown u. The strategy is also known as the method of successive substitutions. For our particular problem, we use a known, previous solution in the coefficient q(u). More precisely, given a solution u^k from iteration k, we seek a new (hopefully improved) solution u^{k+1} in iteration k+1 such that u^{k+1} solves the linear problem,

(4) \nabla\cdot \left(q(u^k)\nabla u^{k+1}\right) = 0,\quad k=0,1,\ldots

The iterations require an initial guess u^0. The hope is that u^{k} \rightarrow u as k\rightarrow\infty, and that u^{k+1} is sufficiently close to the exact solution u of the discrete problem after just a few iterations.

We can easily formulate a variational problem for u^{k+1} from the last equation. Equivalently, we can approximate q(u) by q(u^k) in \int_\Omega q(u)\nabla u\cdot \nabla v \, \mathrm{d}x to obtain the same linear variational problem. In both cases, the problem consists of seeking u^{k+1} \in V such that

(5) \tilde F(u^{k+1}; v) = 0 \quad \forall v \in \hat{V},\quad k=0,1,\ldots,

with

(6) \tilde F(u^{k+1}; v) = \int_\Omega q(u^k)\nabla u^{k+1}\cdot \nabla v \, \mathrm{d}x\thinspace .

Since this is a linear problem in the unknown u^{k+1}, we can equivalently use the formulation

a(u^{k+1},v) = L(v),

with

a(u,v) &= \int_\Omega q(u^k)\nabla u\cdot \nabla v \, \mathrm{d}x
\\
L(v) &= 0\thinspace .

The iterations can be stopped when \epsilon\equiv ||u^{k+1}-u^k|| < \mbox{tol}, where \mbox{tol} is a small tolerance, say 10^{-5}, or when the number of iterations exceeds some critical limit. The latter case will pick up divergence of the method or unacceptably slow convergence.

In the solution algorithm we only need to store u^k and u^{k+1}, called u_k and u in the code below. The algorithm can then be expressed as follows:

import numpy

def q(u):
    return (1+u)**m   # m is a global parameter of the problem

# Define variational problem for Picard iteration
u = TrialFunction(V)
v = TestFunction(V)
u_k = interpolate(Constant(0.0), V)  # previous (known) u
a = inner(q(u_k)*nabla_grad(u), nabla_grad(v))*dx
f = Constant(0.0)
L = f*v*dx

# Picard iterations
u = Function(V)     # new unknown function
eps = 1.0           # error measure ||u-u_k||
tol = 1.0E-5        # tolerance
iter = 0            # iteration counter
maxiter = 25        # max no of iterations allowed
while eps > tol and iter < maxiter:
    iter += 1
    solve(a == L, u, bcs)  # bcs: list of DirichletBC objects (Dirichlet conditions)
    diff = u.vector().array() - u_k.vector().array()
    eps = numpy.linalg.norm(diff, ord=numpy.Inf)
    print 'iter=%d: norm=%g' % (iter, eps)
    u_k.assign(u)   # update for next iteration

We need to define the previous solution in the iterations, u_k, as a finite element function so that u_k can be updated with u at the end of the loop. We may create the initial Function u_k by interpolating an Expression or a Constant into the same function space as u lives in (V).
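For instance, a nonzero initial guess can be made by interpolating an Expression instead of a Constant (the linear profile below is just an illustrative choice, not taken from the program files):

u_k = interpolate(Expression('x[0]'), V)  # e.g., a linear initial guess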

In the code above we demonstrate how to use numpy functionality to compute the norm of the difference between the two most recent solutions. Here we apply the maximum norm (\ell_\infty norm) on the difference of the solution vectors (ord=1 and ord=2 give the \ell_1 and \ell_2 vector norms – other norms are possible for numpy arrays, see pydoc numpy.linalg.norm).
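For example, the \ell_2 norm of the difference is obtained by just changing the ord argument in the loop above:

eps = numpy.linalg.norm(diff, ord=2)  # l2 norm instead of the max norm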

The file picard_np.py contains the complete code for this nonlinear Poisson problem. The implementation is d dimensional, with mesh construction and setting of Dirichlet conditions as explained in the section Parameterizing the Number of Space Dimensions. For a 33\times 33 grid with m=2 we need 9 iterations for convergence when the tolerance is 10^{-5}.
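As a reminder of the technique from that section, the mesh and function space construction can be parameterized roughly as follows. This is only a sketch; the exact command-line interface is defined in the referenced section and in picard_np.py:

import sys

# Hypothetical command line: polynomial degree followed by the number
# of cell divisions in each direction, e.g., '1 32 32' for a 2D problem
degree = int(sys.argv[1])
divisions = [int(arg) for arg in sys.argv[2:]]
d = len(divisions)  # number of space dimensions
domain_type = [UnitInterval, UnitSquare, UnitCube]
mesh = domain_type[d-1](*divisions)
V = FunctionSpace(mesh, 'Lagrange', degree)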

A Newton Method at the Algebraic Level

After having discretized our nonlinear PDE problem, we may use Newton’s method to solve the system of nonlinear algebraic equations. Restricting the continuous variational problem to the discrete space yields a system of equations for the unknown parameters U_1,\ldots,U_N:

(7) F_i(U_1,\ldots,U_N) \equiv
     \sum_{j=1}^N
     \int_\Omega \left( q\left(\sum_{\ell=1}^NU_\ell\phi_\ell\right)
     \nabla \phi_j U_j\right)\cdot \nabla \hat\phi_i \, \mathrm{d}x = 0,\quad i=1,\ldots,N\thinspace .

Newton’s method for the system F_i(U_1,\ldots,U_N)=0, i=1,\ldots,N, can be formulated as

\sum_{j=1}^N
{\partial \over\partial U_j} F_i(U_1^k,\ldots,U_N^k)\delta U_j
&= -F_i(U_1^k,\ldots,U_N^k),\quad i=1,\ldots,N,\\
U_j^{k+1} &= U_j^k + \omega\delta U_j,\quad j=1,\ldots,N,

where \omega\in [0,1] is a relaxation parameter, and k is an iteration index. An initial guess u^0 must be provided to start the algorithm.

The original Newton method has \omega=1, but in problems where it is difficult to obtain convergence, so-called under-relaxation with \omega < 1 may help. It means that one takes a smaller step than what is suggested by Newton’s method.

We need, in a program, to compute the Jacobian matrix \partial F_i/\partial U_j and the right-hand side vector -F_i. Our present problem has F_i given by (7). The derivative \partial F_i/\partial U_j becomes

(8) \int_\Omega \left\lbrack
     q'\left(\sum_{\ell=1}^NU_\ell^k\phi_\ell\right)\phi_j
     \nabla \left(\sum_{\ell=1}^NU_\ell^k\phi_\ell\right)\cdot \nabla \hat\phi_i
     +
     q\left(\sum_{\ell=1}^NU_\ell^k\phi_\ell\right)
     \nabla \phi_j \cdot \nabla \hat\phi_i
     \right\rbrack
      \, \mathrm{d}x\thinspace .

The following results were used to obtain the previous equation:

{\partial u\over\partial U_j} = {\partial\over\partial U_j}
\sum_{\ell=1}^NU_\ell\phi_\ell = \phi_j,\quad {\partial\over\partial U_j}\nabla u = \nabla\phi_j,\quad {\partial\over\partial U_j}q(u) = q'(u)\phi_j\thinspace .

We can reformulate the Jacobian matrix by introducing the short notation u^k = \sum_{j=1}^NU_j^k\phi_j:

{\partial F_i\over\partial U_j} =
\int_\Omega \left\lbrack
q'(u^k)\phi_j
\nabla u^k \cdot \nabla \hat\phi_i
+
q(u^k)
\nabla \phi_j \cdot \nabla \hat\phi_i
\right\rbrack \, \mathrm{d}x\thinspace .

In order to make FEniCS compute this matrix, we need to formulate a corresponding variational problem. Looking at the linear system of equations in Newton’s method,

\sum_{j=1}^N {\partial F_i\over\partial U_j}\delta U_j = -F_i,\quad
i=1,\ldots,N,

we can introduce v as a general test function replacing \hat\phi_i, and we can identify the unknown \delta u = \sum_{j=1}^N\delta U_j\phi_j. From the linear system we can now go “backwards” to construct the corresponding linear discrete weak form to be solved in each Newton iteration:

(9) \int_\Omega \left\lbrack
     q'(u^k)\delta u
     \nabla u^k \cdot \nabla v
     +
     q(u^k)
     \nabla \delta u\cdot \nabla v
     \right\rbrack \, \mathrm{d}x = - \int_\Omega q(u^k)
     \nabla u^k\cdot \nabla v \, \mathrm{d}x\thinspace .

This variational form fits the standard notation a(\delta u,v)=L(v) with

a(\delta u,v) &=
\int_\Omega \left\lbrack
q'(u^k)\delta u
\nabla u^k \cdot \nabla v
+
q(u^k)
\nabla \delta u \cdot \nabla v
\right\rbrack
 \, \mathrm{d}x\\
L(v) &= - \int_\Omega q(u^k)
\nabla u^k\cdot \nabla v \, \mathrm{d}x\thinspace .

Note the important feature in Newton’s method that the previous solution u^k replaces u in the formulas when computing the matrix \partial F_i/\partial U_j and vector F_i for the linear system in each Newton iteration.

We now turn to the implementation. To obtain a good initial guess u^0, we can solve a simplified, linear problem, typically with q(u)=1, which yields the standard Laplace equation \nabla^2 u^0 =0. The recipe for solving this problem appears in the sections Variational Formulation, Implementation (1), and Combining Dirichlet and Neumann Conditions. The code for computing u^0 becomes as follows:

tol = 1E-14
def left_boundary(x, on_boundary):
    return on_boundary and abs(x[0]) < tol

def right_boundary(x, on_boundary):
    return on_boundary and abs(x[0]-1) < tol

Gamma_0 = DirichletBC(V, Constant(0.0), left_boundary)
Gamma_1 = DirichletBC(V, Constant(1.0), right_boundary)
bcs = [Gamma_0, Gamma_1]

# Define variational problem for initial guess (q(u)=1, i.e., m=0)
u = TrialFunction(V)
v = TestFunction(V)
a = inner(nabla_grad(u), nabla_grad(v))*dx
f = Constant(0.0)
L = f*v*dx
A, b = assemble_system(a, L, bcs)
u_k = Function(V)
U_k = u_k.vector()
solve(A, U_k, b)

Here, u_k denotes the solution function for the previous iteration, so that the solution after each Newton iteration is u = u_k + omega*du. Initially, u_k is the initial guess we call u^0 in the mathematics.

The Dirichlet boundary conditions for \delta u, in the problem to be solved in each Newton iteration, are somewhat different from the conditions for u. Assuming that u^k fulfills the Dirichlet conditions for u, \delta u must be zero at the boundaries where the Dirichlet conditions apply, in order for u^{k+1}=u^k + \omega\delta u to fulfill the right boundary values. We therefore define an additional list of Dirichlet boundary condition objects for \delta u:

Gamma_0_du = DirichletBC(V, Constant(0), left_boundary)
Gamma_1_du = DirichletBC(V, Constant(0), right_boundary)
bcs_du = [Gamma_0_du, Gamma_1_du]

The nonlinear coefficient and its derivative must be defined before coding the weak form of the Newton system:

def q(u):
    return (1+u)**m

def Dq(u):
    return m*(1+u)**(m-1)

du = TrialFunction(V) # u = u_k + omega*du
a = inner(q(u_k)*nabla_grad(du), nabla_grad(v))*dx + \
    inner(Dq(u_k)*du*nabla_grad(u_k), nabla_grad(v))*dx
L = -inner(q(u_k)*nabla_grad(u_k), nabla_grad(v))*dx

The Newton iteration loop is very similar to the Picard iteration loop in the section Picard Iteration:

du = Function(V)
u  = Function(V)  # u = u_k + omega*du
omega = 1.0       # relaxation parameter
eps = 1.0
tol = 1.0E-5
iter = 0
maxiter = 25
while eps > tol and iter < maxiter:
    iter += 1
    A, b = assemble_system(a, L, bcs_du)
    solve(A, du.vector(), b)
    eps = numpy.linalg.norm(du.vector().array(), ord=numpy.Inf)
    print 'Norm:', eps
    u.vector()[:] = u_k.vector() + omega*du.vector()
    u_k.assign(u)

There are other ways of implementing the update of the solution as well:

u.assign(u_k)  # u = u_k
u.vector().axpy(omega, du.vector())

# or
u.vector()[:] += omega*du.vector()

The axpy(a, y) operation adds a scalar a times a Vector y to a Vector object. It is usually a fast operation calling up an optimized BLAS routine for the calculation.

Mesh construction for a d-dimensional problem with arbitrary degree of the Lagrange elements can be done as explained in the section Parameterizing the Number of Space Dimensions. The complete program appears in the file alg_newton_np.py.

A Newton Method at the PDE Level

Although Newton’s method in PDE problems is normally formulated at the linear algebra level, i.e., as a solution method for systems of nonlinear algebraic equations, we can also formulate the method at the PDE level. This approach yields a linearization of the PDEs before they are discretized. FEniCS users will probably find this technique simpler to apply than the more standard method in the section A Newton Method at the Algebraic Level.

Given an approximation to the solution field, u^k, we seek a perturbation \delta u so that

u^{k+1} = u^k + \delta u

fulfills the nonlinear PDE. However, the problem for \delta u is still nonlinear and nothing is gained. The idea is therefore to assume that \delta u is sufficiently small so that we can linearize the problem with respect to \delta u. Inserting u^{k+1} in the PDE, linearizing the q term as

q(u^{k+1}) = q(u^k) + q'(u^k)\delta u + {\cal O}((\delta u)^2)
\approx q(u^k) + q'(u^k)\delta u,

and dropping nonlinear terms in \delta u, we get

\nabla\cdot\left( q(u^k)\nabla u^k\right) +
\nabla\cdot\left( q(u^k)\nabla\delta u\right) +
\nabla\cdot\left( q'(u^k)\delta u\nabla u^k\right) = 0\thinspace .

We may collect the terms with the unknown \delta u on the left-hand side,

\nabla\cdot\left( q(u^k)\nabla\delta u\right) +
\nabla\cdot\left( q'(u^k)\delta u\nabla u^k\right) =
-\nabla\cdot\left( q(u^k)\nabla u^k\right)\thinspace .

The weak form of this PDE is derived by multiplying by a test function v and integrating over \Omega, integrating as usual the second-order derivatives by parts:

\int_\Omega \left(
q(u^k)\nabla\delta u\cdot \nabla v
+ q'(u^k)\delta u\nabla u^k\cdot \nabla v\right) \, \mathrm{d}x
= -\int_\Omega q(u^k)\nabla u^k\cdot \nabla v \, \mathrm{d}x\thinspace .

The variational problem reads: find \delta u\in V such that a(\delta u,v) = L(v) for all v\in \hat V, where

a(\delta u,v) &=
\int_\Omega \left(
q(u^k)\nabla\delta u\cdot \nabla v
+ q'(u^k)\delta u\nabla u^k\cdot \nabla v\right) \, \mathrm{d}x,
\\
L(v) &= -
\int_\Omega q(u^k)\nabla u^k\cdot \nabla v \, \mathrm{d}x\thinspace .

The function spaces V and \hat V, being continuous or discrete, are as in the linear Poisson problem from the section Variational Formulation.

We must provide some initial guess, e.g., the solution of the PDE with q(u)=1. The corresponding weak form a_0(u^0,v)=L_0(v) has

a_0(u,v)=\int_\Omega\nabla u\cdot \nabla v \, \mathrm{d}x,\quad L_0(v)=0\thinspace .

Thereafter, we enter a loop and solve a(\delta u,v)=L(v) for \delta u and compute a new approximation u^{k+1} = u^k + \delta u. Note that \delta u is a correction, so if u^0 satisfies the prescribed Dirichlet conditions on some part \Gamma_D of the boundary, we must demand \delta u=0 on \Gamma_D.

Looking at the equations just derived, we see that the variational form is the same as for the Newton method at the algebraic level in the section A Newton Method at the Algebraic Level. Since Newton’s method at the algebraic level required some “backward” construction of the underlying weak forms, FEniCS users may prefer Newton’s method at the PDE level, which this author finds more straightforward, although not so commonly documented in the literature on numerical methods for PDEs. There is seemingly no need for differentiations to derive a Jacobian matrix, but a mathematically equivalent derivation is done when nonlinear terms are linearized using the first two Taylor series terms and when products in the perturbation \delta u are neglected.
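Concretely, the Newton loop at the PDE level can be coded exactly as before. Here is a minimal sketch, reusing q, Dq, u_k, v, and bcs_du from the section A Newton Method at the Algebraic Level (with \omega=1, so that u^{k+1}=u^k+\delta u):

du = TrialFunction(V)
a = inner(q(u_k)*nabla_grad(du), nabla_grad(v))*dx + \
    inner(Dq(u_k)*du*nabla_grad(u_k), nabla_grad(v))*dx
L = -inner(q(u_k)*nabla_grad(u_k), nabla_grad(v))*dx

du = Function(V)
eps = 1.0; tol = 1.0E-5; iter = 0; maxiter = 25
while eps > tol and iter < maxiter:
    iter += 1
    A, b = assemble_system(a, L, bcs_du)
    solve(A, du.vector(), b)
    eps = numpy.linalg.norm(du.vector().array(), ord=numpy.Inf)
    u_k.vector().axpy(1.0, du.vector())  # u^{k+1} = u^k + delta u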

The implementation is identical to the one in the section A Newton Method at the Algebraic Level and is found in the file pde_newton_np.py. The reader is encouraged to go through this code to be convinced that the present method actually ends up with the same program as needed for the Newton method at the linear algebra level in the section A Newton Method at the Algebraic Level.

Solving the Nonlinear Variational Problem Directly

The previous hand-calculations and manual implementation of Picard or Newton methods can be automated by tools in FEniCS. In a nutshell, one can just write

problem = NonlinearVariationalProblem(F, u, bcs, J)
solver  = NonlinearVariationalSolver(problem)
solver.solve()

where F corresponds to the nonlinear form F(u;v), u is the unknown Function object, bcs represents the essential boundary conditions (in general a list of DirichletBC objects), and J is a variational form for the Jacobian of F.

Let us explain in detail how to use the built-in tools for nonlinear variational problems and their solution. The appropriate F form is straightforwardly defined as follows, assuming q(u) is coded as a Python function:

u_ = Function(V)     # most recently computed solution
v  = TestFunction(V)
F  = inner(q(u_)*nabla_grad(u_), nabla_grad(v))*dx

Note here that u_ is a Function (not a TrialFunction). An alternative and perhaps more intuitive formulation of F is to define F(u;v) directly in terms of a trial function u and a test function v, and then create the proper F by

u  = TrialFunction(V)
v  = TestFunction(V)
F  = inner(q(u)*nabla_grad(u), nabla_grad(v))*dx
u_ = Function(V)     # the most recently computed solution
F  = action(F, u_)

The latter statement is equivalent to F(u=u_{-}; v), where u_{-} is an existing finite element function representing the most recently computed approximation to the solution. (Note that u^k and u^{k+1} in the previous notation correspond to u_{-} and u in the present notation. We have changed notation to better align the mathematics with the associated UFL code.)
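As a quick sanity check (not something the solver requires), note that after the action call, F is linear in the test function only and can therefore be assembled directly into the residual vector corresponding to F(u=u_{-}; v):

b = assemble(F)  # residual vector at the current u_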

The derivative J (J) of F (F) is formally the Gateaux derivative DF(u_{-}; \delta u, v) of F(u;v) at u=u_{-} in the direction of \delta u. Technically, this Gateaux derivative is derived by computing

(10) \lim_{\epsilon\rightarrow 0}{d\over d\epsilon} F(u_{-} + \epsilon\delta u; v)\thinspace .

The \delta u is now the trial function and u_{-} is the previous approximation to the solution u. We start with

{d\over d\epsilon}\int_\Omega \nabla v\cdot\left( q(u_{-} + \epsilon\delta u)
\nabla (u_{-} + \epsilon\delta u)\right) \, \mathrm{d}x

and obtain

\int_\Omega \nabla v\cdot\left\lbrack
q'(u_{-} + \epsilon\delta u)\delta u
\nabla (u_{-} + \epsilon\delta u)
+
q(u_{-} + \epsilon\delta u)
\nabla \delta u
\right\rbrack \, \mathrm{d}x,

which leads to

\int_\Omega \nabla v\cdot\left\lbrack
q'(u_{-})\delta u
\nabla (u_{-})
+
q(u_{-})
\nabla \delta u
\right\rbrack \, \mathrm{d}x,

as \epsilon\rightarrow 0. This last expression is the Gateaux derivative of F. We may use J or a(\delta u, v) for this derivative, the latter having the advantage that we easily recognize the expression as a bilinear form. However, in the forthcoming code examples J is used as variable name for the Jacobian.

The specification of J goes as follows if du is the TrialFunction:

du = TrialFunction(V)
v  = TestFunction(V)
u_ = Function(V)      # the most recently computed solution
F  = inner(q(u_)*nabla_grad(u_), nabla_grad(v))*dx

J = inner(q(u_)*nabla_grad(du), nabla_grad(v))*dx + \
    inner(Dq(u_)*du*nabla_grad(u_), nabla_grad(v))*dx

The alternative specification of F, with u as TrialFunction, leads to

u  = TrialFunction(V)
v  = TestFunction(V)
u_ = Function(V)      # the most recently computed solution
F  = inner(q(u)*nabla_grad(u), nabla_grad(v))*dx
F  = action(F, u_)

J = inner(q(u_)*nabla_grad(u), nabla_grad(v))*dx + \
    inner(Dq(u_)*u*nabla_grad(u_), nabla_grad(v))*dx

The UFL language, used to specify weak forms, supports differentiation of forms. This feature facilitates automatic symbolic computation of the Jacobian J by calling the function derivative with F, the most recently computed solution (Function), and the unknown (TrialFunction) as parameters:

du = TrialFunction(V)
v  = TestFunction(V)
u_ = Function(V)      # the most recently computed solution
F  = inner(q(u_)*nabla_grad(u_), nabla_grad(v))*dx

J  = derivative(F, u_, du)  # Gateaux derivative in dir. of du

or

u  = TrialFunction(V)
v  = TestFunction(V)
u_ = Function(V)      # the most recently computed solution
F  = inner(q(u)*nabla_grad(u), nabla_grad(v))*dx
F  = action(F, u_)

J  = derivative(F, u_, u)   # Gateaux derivative in dir. of u

The derivative function is obviously very convenient in problems where differentiating F by hand implies lengthy calculations.

The preferred implementation of F and J, depending on whether du or u is the TrialFunction object, is a matter of personal taste. Derivation of the Gateaux derivative by hand, as shown above, is most naturally matched by an implementation where du is the TrialFunction, while use of automatic symbolic differentiation with the aid of the derivative function is most naturally matched by an implementation where u is the TrialFunction. We have implemented both approaches in two files: vp1_np.py with u as TrialFunction, and vp2_np.py with du as TrialFunction. The directory stationary/nonlinear_poisson contains both files. The first command-line argument determines if the Jacobian is to be automatically derived or computed from the hand-derived formula.

The following code defines the nonlinear variational problem and an associated solver based on Newton’s method. We here demonstrate how key parameters in Newton’s method can be set, as well as the choice of solver and preconditioner, and associated parameters, for the linear system occurring in the Newton iteration.

problem = NonlinearVariationalProblem(F, u_, bcs, J)
solver  = NonlinearVariationalSolver(problem)

prm = solver.parameters
prm['newton_solver']['absolute_tolerance'] = 1E-8
prm['newton_solver']['relative_tolerance'] = 1E-7
prm['newton_solver']['maximum_iterations'] = 25
prm['newton_solver']['relaxation_parameter'] = 1.0
if iterative_solver:
    prm['linear_solver'] = 'gmres'
    prm['preconditioner'] = 'ilu'
    prm['krylov_solver']['absolute_tolerance'] = 1E-9
    prm['krylov_solver']['relative_tolerance'] = 1E-7
    prm['krylov_solver']['maximum_iterations'] = 1000
    prm['krylov_solver']['gmres']['restart'] = 40
    prm['krylov_solver']['preconditioner']['ilu']['fill_level'] = 0
set_log_level(PROGRESS)

solver.solve()

A list of available parameters and their default values can as usual be printed by calling info(prm, True). The u_ we feed to the nonlinear variational problem object is filled with the solution by the call solver.solve().
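For instance:

info(prm, True)                                    # print all parameters and defaults
print prm['newton_solver']['maximum_iterations']   # read back a single value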