
5.1: Homogeneous Linear Equations - Mathematics


A second order differential equation is said to be linear if it can be written as

\[\label{eq:5.1.1} y''+p(x)y'+q(x)y=f(x).\]

We call the function \(f\) on the right a forcing function, since in physical applications it is often related to a force acting on some system modeled by the differential equation. We say that Equation \ref{eq:5.1.1} is homogeneous if \(f\equiv0\) or nonhomogeneous if \(f\not\equiv0\). Since these definitions are like the corresponding definitions in Section 2.1 for the first order linear equation

\[\label{eq:5.1.2} y'+p(x)y=f(x),\]

it is natural to expect similarities between the methods for solving Equation \ref{eq:5.1.1} and Equation \ref{eq:5.1.2}. However, solving Equation \ref{eq:5.1.1} is more difficult than solving Equation \ref{eq:5.1.2}. For example, while Theorem \(\PageIndex{1}\) gives a formula for the general solution of Equation \ref{eq:5.1.2} in the case where \(f\equiv0\) and Theorem \(\PageIndex{2}\) gives a formula for the case where \(f\not\equiv0\), there are no formulas for the general solution of Equation \ref{eq:5.1.1} in either case. Therefore we must be content to solve second order linear equations of special forms.

In Section 2.1 we considered the homogeneous equation \(y'+p(x)y=0\) first, and then used a nontrivial solution of this equation to find the general solution of the nonhomogeneous equation \(y'+p(x)y=f(x)\). Although the progression from the homogeneous to the nonhomogeneous case is not that simple for the second order linear equation, it is still necessary to solve the homogeneous equation

\[\label{eq:5.1.3} y''+p(x)y'+q(x)y=0\]

in order to solve the nonhomogeneous Equation \ref{eq:5.1.1}. This section is devoted to Equation \ref{eq:5.1.3}.

The next theorem gives sufficient conditions for existence and uniqueness of solutions of initial value problems for Equation \ref{eq:5.1.3}. We omit the proof.

Theorem \(\PageIndex{1}\)

Suppose \(p\) and \(q\) are continuous on an open interval \((a,b),\) let \(x_0\) be any point in \((a,b),\) and let \(k_0\) and \(k_1\) be arbitrary real numbers. Then the initial value problem

\[y''+p(x)y'+q(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1 \nonumber\]

has a unique solution on \((a,b).\)

Since \(y\equiv0\) is obviously a solution of Equation \ref{eq:5.1.3}, we call it the trivial solution. Any other solution is nontrivial. Under the assumptions of Theorem \(\PageIndex{1}\), the only solution of the initial value problem

\[y''+p(x)y'+q(x)y=0,\quad y(x_0)=0,\quad y'(x_0)=0 \nonumber\]

on \((a,b)\) is the trivial solution (Exercise 5.1.24).

The next three examples illustrate concepts that we will develop later in this section. You should not be concerned with how to find the given solutions of the equations in these examples. This will be explained in later sections.

Example \(\PageIndex{1}\)

The coefficients of \(y'\) and \(y\) in

\[\label{eq:5.1.4} y''-y=0\]

are the constant functions \(p\equiv0\) and \(q\equiv-1\), which are continuous on \((-\infty,\infty)\). Therefore Theorem \(\PageIndex{1}\) implies that every initial value problem for Equation \ref{eq:5.1.4} has a unique solution on \((-\infty,\infty)\).

  1. Verify that \(y_1=e^x\) and \(y_2=e^{-x}\) are solutions of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).
  2. Verify that if \(c_1\) and \(c_2\) are arbitrary constants, \(y=c_1e^x+c_2e^{-x}\) is a solution of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).
  3. Solve the initial value problem \[\label{eq:5.1.5} y''-y=0,\quad y(0)=1,\quad y'(0)=3.\]

Solution:

a. If \(y_1=e^x\) then \(y_1'=e^x\) and \(y_1''=e^x=y_1\), so \(y_1''-y_1=0\). If \(y_2=e^{-x}\), then \(y_2'=-e^{-x}\) and \(y_2''=e^{-x}=y_2\), so \(y_2''-y_2=0\).

b. If \[\label{eq:5.1.6} y=c_1e^x+c_2e^{-x}\] then \[\label{eq:5.1.7} y'=c_1e^x-c_2e^{-x}\] and \[y''=c_1e^x+c_2e^{-x}, \nonumber\]

so \[\begin{aligned} y''-y&=(c_1e^x+c_2e^{-x})-(c_1e^x+c_2e^{-x})\\ &=c_1(e^x-e^x)+c_2(e^{-x}-e^{-x})=0\end{aligned} \nonumber\] for all \(x\). Therefore \(y=c_1e^x+c_2e^{-x}\) is a solution of Equation \ref{eq:5.1.4} on \((-\infty,\infty)\).

c. We can solve Equation \ref{eq:5.1.5} by choosing \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.6} so that \(y(0)=1\) and \(y'(0)=3\). Setting \(x=0\) in Equation \ref{eq:5.1.6} and Equation \ref{eq:5.1.7} shows that this is equivalent to

\[\begin{aligned} c_1+c_2&=1\\ c_1-c_2&=3.\end{aligned} \nonumber\]

Solving these equations yields \(c_1=2\) and \(c_2=-1\). Therefore \(y=2e^x-e^{-x}\) is the unique solution of Equation \ref{eq:5.1.5} on \((-\infty,\infty)\).
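As a quick cross-check, the initial value problem in part (c) can also be solved symbolically, for instance with SymPy (a minimal sketch, not part of the text's method):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y'' - y = 0 with y(0) = 1, y'(0) = 3.
sol = sp.dsolve(y(x).diff(x, 2) - y(x), y(x),
                ics={y(0): 1, y(x).diff(x).subs(x, 0): 3})
print(sol)  # expected: Eq(y(x), 2*exp(x) - exp(-x))
```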

Example \(\PageIndex{2}\)

Let \(\omega\) be a positive constant. The coefficients of \(y'\) and \(y\) in

\[\label{eq:5.1.8} y''+\omega^2y=0\]

are the constant functions \(p\equiv0\) and \(q\equiv\omega^2\), which are continuous on \((-\infty,\infty)\). Therefore Theorem \(\PageIndex{1}\) implies that every initial value problem for Equation \ref{eq:5.1.8} has a unique solution on \((-\infty,\infty)\).

  1. Verify that \(y_1=\cos\omega x\) and \(y_2=\sin\omega x\) are solutions of Equation \ref{eq:5.1.8} on \((-\infty,\infty)\).
  2. Verify that if \(c_1\) and \(c_2\) are arbitrary constants then \(y=c_1\cos\omega x+c_2\sin\omega x\) is a solution of Equation \ref{eq:5.1.8} on \((-\infty,\infty)\).
  3. Solve the initial value problem \[\label{eq:5.1.9} y''+\omega^2y=0,\quad y(0)=1,\quad y'(0)=3.\]

Solution:

a. If \(y_1=\cos\omega x\) then \(y_1'=-\omega\sin\omega x\) and \(y_1''=-\omega^2\cos\omega x=-\omega^2y_1\), so \(y_1''+\omega^2y_1=0\). If \(y_2=\sin\omega x\) then \(y_2'=\omega\cos\omega x\) and \(y_2''=-\omega^2\sin\omega x=-\omega^2y_2\), so \(y_2''+\omega^2y_2=0\).

b. If \[\label{eq:5.1.10} y=c_1\cos\omega x+c_2\sin\omega x\] then \[\label{eq:5.1.11} y'=\omega(-c_1\sin\omega x+c_2\cos\omega x)\] and \[y''=-\omega^2(c_1\cos\omega x+c_2\sin\omega x), \nonumber\] so \[\begin{aligned} y''+\omega^2y&=-\omega^2(c_1\cos\omega x+c_2\sin\omega x)+\omega^2(c_1\cos\omega x+c_2\sin\omega x)\\ &=c_1\omega^2(-\cos\omega x+\cos\omega x)+c_2\omega^2(-\sin\omega x+\sin\omega x)=0\end{aligned} \nonumber\] for all \(x\). Therefore \(y=c_1\cos\omega x+c_2\sin\omega x\) is a solution of Equation \ref{eq:5.1.8} on \((-\infty,\infty)\).

c. To solve Equation \ref{eq:5.1.9} we must choose \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.10} so that \(y(0)=1\) and \(y'(0)=3\). Setting \(x=0\) in Equation \ref{eq:5.1.10} and Equation \ref{eq:5.1.11} shows that \(c_1=1\) and \(c_2=3/\omega\). Therefore

\[y=\cos\omega x+{3\over\omega}\sin\omega x \nonumber\]

is the unique solution of Equation \ref{eq:5.1.9} on \((-\infty,\infty)\).

Theorem \(\PageIndex{1}\) implies that if \(k_0\) and \(k_1\) are arbitrary real numbers then the initial value problem

\[\label{eq:5.1.12} P_0(x)y''+P_1(x)y'+P_2(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1\]

has a unique solution on an interval \((a,b)\) that contains \(x_0\), provided that \(P_0\), \(P_1\), and \(P_2\) are continuous and \(P_0\) has no zeros on \((a,b)\). To see this, we rewrite the differential equation in Equation \ref{eq:5.1.12} as

\[y''+{P_1(x)\over P_0(x)}y'+{P_2(x)\over P_0(x)}y=0 \nonumber\]

and apply Theorem \(\PageIndex{1}\) with \(p=P_1/P_0\) and \(q=P_2/P_0\).

Example \(\PageIndex{3}\)

The equation

\[\label{eq:5.1.13} x^2y''+xy'-4y=0\]

has the form of the differential equation in Equation \ref{eq:5.1.12}, with \(P_0(x)=x^2\), \(P_1(x)=x\), and \(P_2(x)=-4\), which are all continuous on \((-\infty,\infty)\). However, since \(P_0(0)=0\) we must consider solutions of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\). Since \(P_0\) has no zeros on these intervals, Theorem \(\PageIndex{1}\) implies that the initial value problem

\[x^2y''+xy'-4y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1 \nonumber\]

has a unique solution on \((0,\infty)\) if \(x_0>0\), or on \((-\infty,0)\) if \(x_0<0\).

  1. Verify that \(y_1=x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,\infty)\) and \(y_2=1/x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\).
  2. Verify that if \(c_1\) and \(c_2\) are any constants then \(y=c_1x^2+c_2/x^2\) is a solution of Equation \ref{eq:5.1.13} on \((-\infty,0)\) and \((0,\infty)\).
  3. Solve the initial value problem \[\label{eq:5.1.14} x^2y''+xy'-4y=0,\quad y(1)=2,\quad y'(1)=0.\]
  4. Solve the initial value problem \[\label{eq:5.1.15} x^2y''+xy'-4y=0,\quad y(-1)=2,\quad y'(-1)=0.\]

Solution:

a. If \(y_1=x^2\) then \(y_1'=2x\) and \(y_1''=2\), so \[x^2y_1''+xy_1'-4y_1=x^2(2)+x(2x)-4x^2=0 \nonumber\] for \(x\) in \((-\infty,\infty)\). If \(y_2=1/x^2\), then \(y_2'=-2/x^3\) and \(y_2''=6/x^4\), so \[x^2y_2''+xy_2'-4y_2=x^2\left({6\over x^4}\right)-x\left({2\over x^3}\right)-{4\over x^2}=0 \nonumber\] for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

b. If \[\label{eq:5.1.16} y=c_1x^2+{c_2\over x^2}\] then \[\label{eq:5.1.17} y'=2c_1x-{2c_2\over x^3}\] and \[y''=2c_1+{6c_2\over x^4}, \nonumber\] so \[\begin{aligned} x^2y''+xy'-4y&=x^2\left(2c_1+{6c_2\over x^4}\right)+x\left(2c_1x-{2c_2\over x^3}\right)-4\left(c_1x^2+{c_2\over x^2}\right)\\ &=c_1(2x^2+2x^2-4x^2)+c_2\left({6\over x^2}-{2\over x^2}-{4\over x^2}\right)\\ &=c_1\cdot0+c_2\cdot0=0\end{aligned} \nonumber\] for \(x\) in \((-\infty,0)\) or \((0,\infty)\).

c. To solve Equation \ref{eq:5.1.14} we choose \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.16} so that \(y(1)=2\) and \(y'(1)=0\). Setting \(x=1\) in Equation \ref{eq:5.1.16} and Equation \ref{eq:5.1.17} shows that this is equivalent to

\[\begin{aligned} c_1+c_2&=2\\ 2c_1-2c_2&=0.\end{aligned} \nonumber\]

Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of Equation \ref{eq:5.1.14} on \((0,\infty)\).

d. We can solve Equation \ref{eq:5.1.15} by choosing \(c_1\) and \(c_2\) in Equation \ref{eq:5.1.16} so that \(y(-1)=2\) and \(y'(-1)=0\). Setting \(x=-1\) in Equation \ref{eq:5.1.16} and Equation \ref{eq:5.1.17} shows that this is equivalent to

\[\begin{aligned} c_1+c_2&=2\\ -2c_1+2c_2&=0.\end{aligned} \nonumber\]

Solving these equations yields \(c_1=1\) and \(c_2=1\). Therefore \(y=x^2+1/x^2\) is the unique solution of Equation \ref{eq:5.1.15} on \((-\infty,0)\).
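The same kind of symbolic check works for the Euler equation of this example (a sketch; note that, like the discussion above, it only pins down the solution on the interval containing the initial point):

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # the interval (0, oo) containing x0 = 1
y = sp.Function('y')

# Solve x^2 y'' + x y' - 4 y = 0 with y(1) = 2, y'(1) = 0.
ode = x**2 * y(x).diff(x, 2) + x * y(x).diff(x) - 4 * y(x)
sol = sp.dsolve(ode, y(x), ics={y(1): 2, y(x).diff(x).subs(x, 1): 0})
print(sol)  # expected: y(x) = x**2 + 1/x**2, up to algebraic form
```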

Although the formulas for the solutions of Equation \ref{eq:5.1.14} and Equation \ref{eq:5.1.15} are both \(y=x^2+1/x^2\), you should not conclude that these two initial value problems have the same solution. Remember that a solution of an initial value problem is defined on an interval containing the initial point; therefore, the solution of Equation \ref{eq:5.1.14} is \(y=x^2+1/x^2\) on the interval \((0,\infty)\), which contains the initial point \(x_0=1\), while the solution of Equation \ref{eq:5.1.15} is \(y=x^2+1/x^2\) on the interval \((-\infty,0)\), which contains the initial point \(x_0=-1\).

The General Solution of a Homogeneous Second Order Linear Equation

If \(y_1\) and \(y_2\) are defined on an interval \((a,b)\) and \(c_1\) and \(c_2\) are constants, then

\[y=c_1y_1+c_2y_2 \nonumber\]

is a linear combination of \(y_1\) and \(y_2\). For example, \(y=2\cos x+7\sin x\) is a linear combination of \(y_1=\cos x\) and \(y_2=\sin x\), with \(c_1=2\) and \(c_2=7\).

The next theorem states a fact that we have already verified in Examples \(\PageIndex{1}\), \(\PageIndex{2}\), and \(\PageIndex{3}\).

Theorem \(\PageIndex{2}\)

If \(y_1\) and \(y_2\) are solutions of the homogeneous equation

\[\label{eq:5.1.18} y''+p(x)y'+q(x)y=0\]

on \((a,b),\) then any linear combination

\[\label{eq:5.1.19} y=c_1y_1+c_2y_2\]

of \(y_1\) and \(y_2\) is also a solution of \eqref{eq:5.1.18} on \((a,b).\)

Proof

If \[y=c_1y_1+c_2y_2 \nonumber\] then \[y'=c_1y_1'+c_2y_2'\quad\text{and}\quad y''=c_1y_1''+c_2y_2''. \nonumber\]

Therefore

\[\begin{aligned} y''+p(x)y'+q(x)y&=(c_1y_1''+c_2y_2'')+p(x)(c_1y_1'+c_2y_2')+q(x)(c_1y_1+c_2y_2)\\ &=c_1\left(y_1''+p(x)y_1'+q(x)y_1\right)+c_2\left(y_2''+p(x)y_2'+q(x)y_2\right)\\ &=c_1\cdot0+c_2\cdot0=0,\end{aligned} \nonumber\]

since \(y_1\) and \(y_2\) are solutions of Equation \ref{eq:5.1.18}.

We say that \(\{y_1,y_2\}\) is a fundamental set of solutions of \eqref{eq:5.1.18} on \((a,b)\) if every solution of Equation \ref{eq:5.1.18} on \((a,b)\) can be written as a linear combination of \(y_1\) and \(y_2\) as in Equation \ref{eq:5.1.19}. In this case we say that Equation \ref{eq:5.1.19} is the general solution of \eqref{eq:5.1.18} on \((a,b)\).

Linear Independence

We need a way to determine whether a given set \(\{y_1,y_2\}\) of solutions of Equation \ref{eq:5.1.18} is a fundamental set. The next definition will enable us to state necessary and sufficient conditions for this.

We say that two functions \(y_1\) and \(y_2\) defined on an interval \((a,b)\) are linearly independent on \((a,b)\) if neither is a constant multiple of the other on \((a,b)\). (In particular, this means that neither can be the trivial solution of Equation \ref{eq:5.1.18}, since, for example, if \(y_1\equiv0\) we could write \(y_1=0\cdot y_2\).) We will also say that the set \(\{y_1,y_2\}\) is linearly independent on \((a,b)\).

Theorem \(\PageIndex{3}\)

Suppose \(p\) and \(q\) are continuous on \((a,b).\) Then a set \(\{y_1,y_2\}\) of solutions of

\[\label{eq:5.1.20} y''+p(x)y'+q(x)y=0\]

on \((a,b)\) is a fundamental set if and only if \(\{y_1,y_2\}\) is linearly independent on \((a,b).\)

Proof

We will present the proof of Theorem \(\PageIndex{3}\) in steps worth regarding as theorems in their own right. However, let us first interpret Theorem \(\PageIndex{3}\) in terms of Examples \(\PageIndex{1}\), \(\PageIndex{2}\), and \(\PageIndex{3}\).

Example \(\PageIndex{4}\)

Since \(e^x/e^{-x}=e^{2x}\) is nonconstant, Theorem \(\PageIndex{3}\) implies that \(y=c_1e^x+c_2e^{-x}\) is the general solution of \(y''-y=0\) on \((-\infty,\infty)\).

Since \(\cos\omega x/\sin\omega x=\cot\omega x\) is nonconstant, Theorem \(\PageIndex{3}\) implies that \(y=c_1\cos\omega x+c_2\sin\omega x\) is the general solution of \(y''+\omega^2y=0\) on \((-\infty,\infty)\).

Since \(x^2/x^{-2}=x^4\) is nonconstant, Theorem \(\PageIndex{3}\) implies that \(y=c_1x^2+c_2/x^2\) is the general solution of \(x^2y''+xy'-4y=0\) on \((-\infty,0)\) and \((0,\infty)\).

The Wronskian and Abel's Formula

To motivate a result that we need in order to prove Theorem \(\PageIndex{3}\), let us see what is required to prove that \(\{y_1,y_2\}\) is a fundamental set of solutions of Equation \ref{eq:5.1.20} on \((a,b)\). Let \(x_0\) be an arbitrary point in \((a,b)\), and suppose \(y\) is an arbitrary solution of Equation \ref{eq:5.1.20} on \((a,b)\). Then \(y\) is the unique solution of the initial value problem

\[\label{eq:5.1.21} y''+p(x)y'+q(x)y=0,\quad y(x_0)=k_0,\quad y'(x_0)=k_1;\]

that is, \(k_0\) and \(k_1\) are the numbers obtained by evaluating \(y\) and \(y'\) at \(x_0\). Moreover, \(k_0\) and \(k_1\) can be any real numbers, since Theorem \(\PageIndex{1}\) implies that Equation \ref{eq:5.1.21} has a solution no matter how \(k_0\) and \(k_1\) are chosen. Therefore \(\{y_1,y_2\}\) is a fundamental set of solutions of Equation \ref{eq:5.1.20} on \((a,b)\) if and only if it is possible to write the solution of an arbitrary initial value problem Equation \ref{eq:5.1.21} as \(y=c_1y_1+c_2y_2\). This is equivalent to requiring that the system

\[\label{eq:5.1.22}\begin{array}{rcl} c_1y_1(x_0)+c_2y_2(x_0)&=&k_0\\ c_1y_1'(x_0)+c_2y_2'(x_0)&=&k_1\end{array}\]

has a solution \((c_1,c_2)\) for every choice of \((k_0,k_1)\). Let us try to solve Equation \ref{eq:5.1.22}.

Multiplying the first equation in Equation \ref{eq:5.1.22} by \(y_2'(x_0)\) and the second by \(y_2(x_0)\) yields

\[\begin{aligned} c_1y_1(x_0)y_2'(x_0)+c_2y_2(x_0)y_2'(x_0)&=y_2'(x_0)k_0\\ c_1y_1'(x_0)y_2(x_0)+c_2y_2'(x_0)y_2(x_0)&=y_2(x_0)k_1,\end{aligned} \nonumber\]

and subtracting the second equation here from the first yields

\[\label{eq:5.1.23}\left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_1=y_2'(x_0)k_0-y_2(x_0)k_1.\]

Multiplying the first equation in Equation \ref{eq:5.1.22} by \(y_1'(x_0)\) and the second by \(y_1(x_0)\) yields

\[\begin{aligned} c_1y_1(x_0)y_1'(x_0)+c_2y_2(x_0)y_1'(x_0)&=y_1'(x_0)k_0\\ c_1y_1'(x_0)y_1(x_0)+c_2y_2'(x_0)y_1(x_0)&=y_1(x_0)k_1,\end{aligned} \nonumber\]

and subtracting the first equation here from the second yields

\[\label{eq:5.1.24}\left(y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\right)c_2=y_1(x_0)k_1-y_1'(x_0)k_0.\]

If

\[y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)=0, \nonumber\]

it is impossible to satisfy Equation \ref{eq:5.1.23} and Equation \ref{eq:5.1.24} (and therefore Equation \ref{eq:5.1.22}) unless \(k_0\) and \(k_1\) happen to satisfy

\[\begin{aligned} y_1(x_0)k_1-y_1'(x_0)k_0&=0\\ y_2'(x_0)k_0-y_2(x_0)k_1&=0.\end{aligned} \nonumber\]

On the other hand, if

\[\label{eq:5.1.25} y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)\ne0\]

we can divide Equation \ref{eq:5.1.23} and Equation \ref{eq:5.1.24} by the quantity on the left to obtain

\[\label{eq:5.1.26}\begin{array}{rcl} c_1&=&{\displaystyle{y_2'(x_0)k_0-y_2(x_0)k_1\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)}}\\ c_2&=&{\displaystyle{y_1(x_0)k_1-y_1'(x_0)k_0\over y_1(x_0)y_2'(x_0)-y_1'(x_0)y_2(x_0)}},\end{array}\]

no matter how \(k_0\) and \(k_1\) are chosen. This motivates us to consider conditions on \(y_1\) and \(y_2\) that imply Equation \ref{eq:5.1.25}.

Theorem \(\PageIndex{4}\)

Suppose \(p\) and \(q\) are continuous on \((a,b),\) let \(y_1\) and \(y_2\) be solutions of

\[\label{eq:5.1.27} y''+p(x)y'+q(x)y=0\]

on \((a,b)\), and define

\[\label{eq:5.1.28} W=y_1y_2'-y_1'y_2.\]

Let \(x_0\) be any point in \((a,b).\) Then

\[\label{eq:5.1.29} W(x)=W(x_0)e^{-\int_{x_0}^{x}p(t)\,dt},\quad a<x<b.\]

Therefore either \(W\) has no zeros in \((a,b)\) or \(W\equiv0\) on \((a,b).\)

Proof

Differentiating Equation \ref{eq:5.1.28} yields

\[\label{eq:5.1.30} W'=y_1'y_2'+y_1y_2''-y_1'y_2'-y_1''y_2=y_1y_2''-y_1''y_2.\]

Since \(y_1\) and \(y_2\) both satisfy Equation \ref{eq:5.1.27},

\[y_1''=-py_1'-qy_1\quad\text{and}\quad y_2''=-py_2'-qy_2. \nonumber\]

Substituting these into Equation \ref{eq:5.1.30} yields

\[\begin{aligned} W'&=-y_1\bigl(py_2'+qy_2\bigr)+y_2\bigl(py_1'+qy_1\bigr)\\ &=-p(y_1y_2'-y_2y_1')-q(y_1y_2-y_2y_1)\\ &=-p(y_1y_2'-y_2y_1')=-pW.\end{aligned} \nonumber\]

Therefore \(W'+p(x)W=0\); that is, \(W\) is the solution of the initial value problem

\[y'+p(x)y=0,\quad y(x_0)=W(x_0). \nonumber\]

We leave it to you to verify by separation of variables that this implies Equation \ref{eq:5.1.29}. If \(W(x_0)\ne0\), Equation \ref{eq:5.1.29} implies that \(W\) has no zeros in \((a,b)\), since an exponential is never zero. On the other hand, if \(W(x_0)=0\), Equation \ref{eq:5.1.29} implies that \(W(x)=0\) for all \(x\) in \((a,b)\).
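For the record, a sketch of that separation-of-variables step (dividing by \(W\), which is legitimate on any interval where \(W\ne0\); the degenerate case \(W(x_0)=0\) is covered by uniqueness for the first order linear equation):

\[{W'\over W}=-p(x)\quad\Rightarrow\quad\ln\left|{W(x)\over W(x_0)}\right|=-\int_{x_0}^{x}p(t)\,dt\quad\Rightarrow\quad W(x)=W(x_0)e^{-\int_{x_0}^{x}p(t)\,dt}.\]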

The function \(W\) defined in Equation \ref{eq:5.1.28} is the Wronskian of \(\{y_1,y_2\}\). Formula Equation \ref{eq:5.1.29} is Abel's formula.

The Wronskian of \(\{y_1,y_2\}\) is usually written as the determinant

\[W=\left|\begin{array}{cc} y_1&y_2\\ y_1'&y_2'\end{array}\right|. \nonumber\]

The expressions in Equation \ref{eq:5.1.26} for \(c_1\) and \(c_2\) can be written in terms of determinants as

\[c_1={1\over W(x_0)}\left|\begin{array}{cc} k_0&y_2(x_0)\\ k_1&y_2'(x_0)\end{array}\right|\quad\text{and}\quad c_2={1\over W(x_0)}\left|\begin{array}{cc} y_1(x_0)&k_0\\ y_1'(x_0)&k_1\end{array}\right|. \nonumber\]

If you have taken linear algebra, you may recognize this as Cramer's rule.

Example \(\PageIndex{5}\)

Verify Abel's formula for the following differential equations and the corresponding solutions, from Examples \(\PageIndex{1}\), \(\PageIndex{2}\), and \(\PageIndex{3}\).

  1. \(y''-y=0;\quad y_1=e^x,\; y_2=e^{-x}\)
  2. \(y''+\omega^2y=0;\quad y_1=\cos\omega x,\; y_2=\sin\omega x\)
  3. \(x^2y''+xy'-4y=0;\quad y_1=x^2,\; y_2=1/x^2\)

Solution:

a. Since \(p\equiv0\), we can verify Abel's formula by showing that \(W\) is constant, which is true, since

\[W(x)=\left|\begin{array}{rr} e^x&e^{-x}\\ e^x&-e^{-x}\end{array}\right|=e^x(-e^{-x})-e^xe^{-x}=-2 \nonumber\]

for all \(x\).

b. Again, since \(p\equiv0\), we can verify Abel's formula by showing that \(W\) is constant, which is true, since

\[\begin{aligned} W(x)&=\left|\begin{array}{cc}\cos\omega x&\sin\omega x\\ -\omega\sin\omega x&\omega\cos\omega x\end{array}\right|\\ &=\cos\omega x(\omega\cos\omega x)-(-\omega\sin\omega x)\sin\omega x\\ &=\omega(\cos^2\omega x+\sin^2\omega x)=\omega\end{aligned} \nonumber\]

for all \(x\).

c. Computing the Wronskian of \(y_1=x^2\) and \(y_2=1/x^2\) directly yields

\[\label{eq:5.1.31} W=\left|\begin{array}{cc} x^2&1/x^2\\ 2x&-2/x^3\end{array}\right|=x^2\left(-{2\over x^3}\right)-2x\left({1\over x^2}\right)=-{4\over x}.\]

To verify Abel's formula we rewrite the differential equation as

\[y''+{1\over x}y'-{4\over x^2}y=0 \nonumber\]

to see that \(p(x)=1/x\). If \(x_0\) and \(x\) are both in \((-\infty,0)\) or both in \((0,\infty)\) then

\[\int_{x_0}^{x}p(t)\,dt=\int_{x_0}^{x}{dt\over t}=\ln\left({x\over x_0}\right), \nonumber\]

so Abel's formula becomes

\[\begin{aligned} W(x)&=W(x_0)e^{-\ln(x/x_0)}=W(x_0){x_0\over x}\\ &=-\left({4\over x_0}\right)\left({x_0\over x}\right)\quad\text{from }\eqref{eq:5.1.31}\\ &=-{4\over x},\end{aligned} \nonumber\]

which is consistent with Equation \ref{eq:5.1.31}.
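Part (c) can also be checked symbolically; the sketch below recomputes the Wronskian and compares it with Abel's formula using the base point \(x_0=1\):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
y1, y2 = x**2, 1/x**2

# Wronskian W = y1*y2' - y1'*y2 for x^2 y'' + x y' - 4 y = 0.
W = sp.simplify(y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2)
print(W)  # -4/x

# Abel's formula with p(t) = 1/t and base point x0 = 1.
x0 = 1
abel = W.subs(x, x0) * sp.exp(-sp.integrate(1/t, (t, x0, x)))
print(sp.simplify(abel - W))  # 0, so the two expressions agree
```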

The next theorem will enable us to complete the proof of Theorem \(\PageIndex{3}\).

Theorem \(\PageIndex{5}\)

Suppose \(p\) and \(q\) are continuous on an open interval \((a,b),\) let \(y_1\) and \(y_2\) be solutions of

\[\label{eq:5.1.32} y''+p(x)y'+q(x)y=0\]

on \((a,b),\) and let \(W=y_1y_2'-y_1'y_2.\) Then \(y_1\) and \(y_2\) are linearly independent on \((a,b)\) if and only if \(W\) has no zeros on \((a,b).\)

Proof

We first show that if \(W(x_0)=0\) for some \(x_0\) in \((a,b)\), then \(y_1\) and \(y_2\) are linearly dependent on \((a,b)\). Let \(I\) be a subinterval of \((a,b)\) on which \(y_1\) has no zeros. (If there is no such subinterval, \(y_1\equiv0\) on \((a,b)\), so \(y_1\) and \(y_2\) are linearly dependent, and we are finished with this part of the proof.) Then \(y_2/y_1\) is defined on \(I\), and

\[\label{eq:5.1.33}\left({y_2\over y_1}\right)'={y_1y_2'-y_1'y_2\over y_1^2}={W\over y_1^2}.\]

However, if \(W(x_0)=0\), Theorem \(\PageIndex{4}\) implies that \(W\equiv0\) on \((a,b)\). Therefore Equation \ref{eq:5.1.33} implies that \((y_2/y_1)'\equiv0\), so \(y_2/y_1=c\) (constant) on \(I\). This shows that \(y_2(x)=cy_1(x)\) for all \(x\) in \(I\). However, we want to show that \(y_2(x)=cy_1(x)\) for all \(x\) in \((a,b)\). Let \(Y=y_2-cy_1\). Then \(Y\) is a solution of Equation \ref{eq:5.1.32} on \((a,b)\) such that \(Y\equiv0\) on \(I\), and therefore \(Y'\equiv0\) on \(I\). Consequently, if \(x_0\) is chosen arbitrarily in \(I\) then \(Y\) is a solution of the initial value problem

\[y''+p(x)y'+q(x)y=0,\quad y(x_0)=0,\quad y'(x_0)=0, \nonumber\]

which implies that \(Y\equiv0\) on \((a,b)\), by the paragraph following Theorem \(\PageIndex{1}\). (See also Exercise 5.1.24.) Hence \(y_2-cy_1\equiv0\) on \((a,b)\), which implies that \(y_1\) and \(y_2\) are not linearly independent on \((a,b)\).

Now suppose \(W\) has no zeros on \((a,b)\). Then \(y_1\) cannot be identically zero on \((a,b)\) (why not?), and therefore there is a subinterval \(I\) of \((a,b)\) on which \(y_1\) has no zeros. Since Equation \ref{eq:5.1.33} implies that \(y_2/y_1\) is nonconstant on \(I\), \(y_2\) is not a constant multiple of \(y_1\) on \((a,b)\). A similar argument shows that \(y_1\) is not a constant multiple of \(y_2\) on \((a,b)\), since

\[\left({y_1\over y_2}\right)'={y_1'y_2-y_1y_2'\over y_2^2}=-{W\over y_2^2} \nonumber\]

on any subinterval of \((a,b)\) where \(y_2\) has no zeros.

We can now complete the proof of Theorem \(\PageIndex{3}\). From Theorem \(\PageIndex{5}\), two solutions \(y_1\) and \(y_2\) of Equation \ref{eq:5.1.32} are linearly independent on \((a,b)\) if and only if \(W\) has no zeros on \((a,b)\). From Theorem \(\PageIndex{4}\) and the motivating comments preceding it, \(\{y_1,y_2\}\) is a fundamental set of solutions of Equation \ref{eq:5.1.32} if and only if \(W\) has no zeros on \((a,b)\). Therefore \(\{y_1,y_2\}\) is a fundamental set for Equation \ref{eq:5.1.32} on \((a,b)\) if and only if \(\{y_1,y_2\}\) is linearly independent on \((a,b)\).

The next theorem summarizes the relationships among the concepts discussed in this section.

Theorem \(\PageIndex{6}\)

Suppose \(p\) and \(q\) are continuous on an open interval \((a,b)\) and let \(y_1\) and \(y_2\) be solutions of

\[\label{eq:5.1.34} y''+p(x)y'+q(x)y=0\]

on \((a,b).\) Then the following statements are equivalent; that is, they are either all true or all false.

  1. The general solution of \eqref{eq:5.1.34} on \((a,b)\) is \(y=c_1y_1+c_2y_2\).
  2. \(\{y_1,y_2\}\) is a fundamental set of solutions of \eqref{eq:5.1.34} on \((a,b).\)
  3. \(\{y_1,y_2\}\) is linearly independent on \((a,b).\)
  4. The Wronskian of \(\{y_1,y_2\}\) is nonzero at some point in \((a,b).\)
  5. The Wronskian of \(\{y_1,y_2\}\) is nonzero at all points in \((a,b).\)

We can apply this theorem to an equation written as

\[P_0(x)y''+P_1(x)y'+P_2(x)y=0 \nonumber\]

on an interval \((a,b)\) where \(P_0\), \(P_1\), and \(P_2\) are continuous and \(P_0\) has no zeros.

Theorem \(\PageIndex{7}\)

Suppose \(c\) is in \((a,b)\) and \(\alpha\) and \(\beta\) are real numbers, not both zero. Under the assumptions of Theorem \(\PageIndex{6}\), suppose \(y_1\) and \(y_2\) are solutions of Equation \ref{eq:5.1.34} such that

\[\label{eq:5.1.35}\alpha y_1(c)+\beta y_1'(c)=0\quad\text{and}\quad\alpha y_2(c)+\beta y_2'(c)=0.\]

Then \(\{y_1,y_2\}\) is not linearly independent on \((a,b).\)

Proof

Since \(\alpha\) and \(\beta\) are not both zero, Equation \ref{eq:5.1.35} implies that

\[\left|\begin{array}{cc} y_1(c)&y_1'(c)\\ y_2(c)&y_2'(c)\end{array}\right|=0,\quad\text{so}\quad\left|\begin{array}{cc} y_1(c)&y_2(c)\\ y_1'(c)&y_2'(c)\end{array}\right|=0 \nonumber\]

and Theorem \(\PageIndex{6}\) implies the stated conclusion.



Transforming Homogeneous Equations into Separable Equations

Nonlinear Equations That Can Be Transformed into Separable Equations

We have seen that the nonlinear Bernoulli equation can be transformed into a separable equation by the substitution \(y=uy_1\) if \(y_1\) is suitably chosen. Now let us discover a sufficient condition for a nonlinear first order differential equation

\[\label{eq:2.4.4} y'=f(x,y)\]

to be transformable into a separable equation in the same way. Substituting \(y=uy_1\) into Equation \ref{eq:2.4.4} yields \(u'y_1+uy_1'=f(x,uy_1)\), which is equivalent to \(u'y_1=f(x,uy_1)-uy_1'\). If \(f(x,uy_1)=q(u)y_1'\) for some function \(q\), then this becomes \[\label{eq:2.4.5} u'y_1=\left(q(u)-u\right)y_1',\] which is separable. After checking for constant solutions \(u\equiv u_0\) such that \(q(u_0)=u_0\), we can separate variables to obtain \[{u'\over q(u)-u}={y_1'\over y_1}. \nonumber\]

Homogeneous Nonlinear Equations

In the text we will consider only the most widely studied class of equations for which the method of the preceding paragraph works. Other types of equations appear in Exercises 2.4.44–2.4.51.

The differential equation Equation \ref{eq:2.4.4} is said to be homogeneous if \(x\) and \(y\) occur in \(f\) in such a way that \(f(x,y)\) depends only on the ratio \(y/x\); that is, Equation \ref{eq:2.4.4} can be written as

\[\label{eq:2.4.7} y'=q(y/x),\]

where \(q=q(u)\) is a function of a single variable. For example, \[y'={y+xe^{-y/x}\over x}\quad\text{and}\quad y'={y^2+xy-x^2\over x^2} \nonumber\] are of the form Equation \ref{eq:2.4.7}, with \(q(u)=u+e^{-u}\) and \(q(u)=u^2+u-1\), respectively. The general method discussed above can be applied to Equation \ref{eq:2.4.7} with \(y_1=x\) (and therefore \(y_1'=1\)). Thus, substituting \(y=ux\) into Equation \ref{eq:2.4.7} yields \(u'x+u=q(u)\), and separation of variables (after checking for constant solutions \(u\equiv u_0\) such that \(q(u_0)=u_0\)) yields

\[{u'\over q(u)-u}={1\over x}. \nonumber\]

Before turning to examples, we point out something you may have already noticed: the definition of homogeneous equation given here is not the same as the definition given in Section 2.1, where we said that a linear equation of the form \(y'+p(x)y=0\) is homogeneous. We make no apology for this inconsistency, since we did not create it. Historically, homogeneous has been used in these two inconsistent ways. The one having to do with linear equations is the more important. This is the only section of the book where the meaning defined here will apply.

Since \(y/x\) is in general undefined if \(x=0\), we will consider solutions of homogeneous equations only on open intervals that do not contain the point \(x=0\).

Substituting \(y=ux\) into Equation \ref{eq:2.4.8} and simplifying allows the variables to be separated; integrating and then returning to \(y=ux\) yields the solution.
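As a purely illustrative instance of the substitution (on a representative homogeneous equation, not necessarily Equation \ref{eq:2.4.8}): for \(y'=(x+y)/x\), i.e. \(q(u)=1+u\), setting \(y=ux\) gives \(u'x=1\), so \(u=\ln|x|+c\) and \(y=x\ln|x|+cx\). A SymPy sketch confirming this:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Representative homogeneous equation y' = (x + y)/x, i.e. q(u) = 1 + u.
ode = sp.Eq(y(x).diff(x), (x + y(x)) / x)
print(sp.dsolve(ode, y(x)))  # expected: Eq(y(x), x*(C1 + log(x)))
```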

Figure 2.4.2 shows a direction field and integral curves for Equation \ref{eq:2.4.8}.


Calculus Early Transcendentals: Integral and Multivariable Calculus for Social Sciences

Subsection 5.3.1 Homogeneous DEs

A simple, but important and useful, type of separable equation is:

Definition 5.21. First Order Homogeneous Linear DE.

A first order homogeneous linear differential equation is one of the form \(y'+p(t)y=0\), or equivalently \(y'=-p(t)y\).

We have already seen a first order homogeneous linear differential equation, namely the simple growth and decay model \(y'=ky\).

Since first order homogeneous linear equations are separable, we can solve them in the usual way:

\[{dy\over y}=-p(t)\,dt,\qquad\ln|y|=P(t)+C,\qquad y=\pm e^{C}e^{P(t)}=Ae^{P(t)},\]

where \(P(t)\) is an antiderivative of \(-p(t)\). As in previous examples, if we allow \(A=0\) we get the constant solution \(y=0\).

Example 5.22. Solving an IVP I.

Solve the initial value problem

so the general solution of the differential equation is

To compute the constant coefficient \(A\), we substitute:

Example 5.23. Solving an IVP II.

Solve the initial value problem \(ty'+3y=0\), \(y(1)=2\), assuming \(t>0\).

We write the equation in standard form: \(y'+3y/t=0\). Then

\[P(t)=\int-{3\over t}\,dt=-3\ln t,\qquad y=Ae^{-3\ln t}=At^{-3}.\]

Substituting to find \(A\): \(2=A(1)^{-3}=A\), so the solution is \(y=2t^{-3}\).
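A one-line symbolic check of this example (a sketch using SymPy's `dsolve` with initial conditions):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# t y' + 3y = 0 with y(1) = 2.
sol = sp.dsolve(t * y(t).diff(t) + 3 * y(t), y(t), ics={y(1): 2})
print(sol)  # expected: Eq(y(t), 2/t**3)
```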

Subsection 5.3.2 Non-Homogeneous DEs

As you might guess, a first order non-homogeneous linear differential equation has the form \(y'+p(t)y=f(t)\). Not only is it closely related in form to the first order homogeneous linear equation, we can also use what we know about solving homogeneous equations to solve the general linear equation.

Definition 5.24. First Order Non-Homogeneous Linear DE.

Note: When the coefficient of the first derivative is one in the first order non-homogeneous linear differential equation, as in the definition above, then we say the DE is in standard form.

Let us now discuss how we can find all solutions of a first order non-homogeneous linear differential equation. Suppose that \(y_1(t)\) and \(y_2(t)\) are solutions of \(y'+p(t)y=f(t)\). Let \(g(t)=y_1-y_2\). Then

\[g'(t)+p(t)g(t)=\bigl(y_1'+p(t)y_1\bigr)-\bigl(y_2'+p(t)y_2\bigr)=f(t)-f(t)=0.\]

In other words, \(g(t)=y_1-y_2\) is a solution to the homogeneous equation \(y'+p(t)y=0\). Thus any solution to the linear equation \(y'+p(t)y=f(t)\), call it \(y_1\), can be written as \(y_2+g(t)\), for some particular \(y_2\) and some solution \(g(t)\) of the homogeneous equation \(y'+p(t)y=0\). Since we already know how to find all solutions of the homogeneous equation, finding just one solution to the equation \(y'+p(t)y=f(t)\) will give us all of them.

Theorem 5.25. General Solution of First Order Non-Homogeneous Linear DE.

Given a first order non-homogeneous linear differential equation

\[y'+p(t)y=f(t),\]

let \(h(t)\) be a particular solution, and let \(g(t)\) be the general solution to the corresponding homogeneous DE

\[y'+p(t)y=0.\]

Then the general solution to the non-homogeneous DE is constructed as the sum of the above two solutions:

\[y(t)=g(t)+h(t).\]

Subsubsection 5.3.2.1 Variation of Parameters

We now introduce the first of two methods discussed in these notes for solving a first order non-homogeneous linear differential equation. Again, it turns out that what we already know helps. We know that the general solution to the homogeneous equation \(y'+p(t)y=0\) looks like \(Ae^{P(t)}\), where \(P(t)\) is an antiderivative of \(-p(t)\). We now make an inspired guess: consider the function \(v(t)e^{P(t)}\), in which we have replaced the constant parameter \(A\) with the function \(v(t)\). This technique is called variation of parameters. For convenience write this as \(s(t)=v(t)h(t)\), where \(h(t)=e^{P(t)}\) is a solution to the homogeneous equation. Now let's compute a bit with \(s(t)\):

\[s'(t)+p(t)s(t)=v'(t)h(t)+v(t)h'(t)+p(t)v(t)h(t)=v'(t)h(t)+v(t)\bigl(h'(t)+p(t)h(t)\bigr)=v'(t)h(t).\]

The last equality is true because \(h'(t)+p(t)h(t)=0\), since \(h(t)\) is a solution to the homogeneous equation. We are hoping to find a function \(s(t)\) so that \(s'(t)+p(t)s(t)=f(t)\); we will have such a function if we can arrange to have \(v'(t)h(t)=f(t)\), that is, \(v'(t)=f(t)/h(t)\). But this is as easy (or hard) as finding an antiderivative of \(f(t)/h(t)\). Putting this all together, the general solution to \(y'+p(t)y=f(t)\) is

\[y(t)=v(t)e^{P(t)}+Ae^{P(t)}.\]

Method of Variation of Parameters.

Given a first order non-homogeneous linear differential equation

\[y'+p(t)y=f(t),\]

using variation of parameters the general solution is given by

\[y(t)=Ae^{P(t)}+v(t)e^{P(t)},\]

where \(v'(t)=e^{-P(t)}f(t)\) and \(P(t)\) is an antiderivative of \(-p(t)\).
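This recipe translates directly into a few lines of SymPy; the helper name `solve_linear_de` below is ours, and the sketch assumes \(p\) and \(f\) have closed-form antiderivatives:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
A = sp.symbols('A')  # arbitrary constant

def solve_linear_de(p, f):
    """General solution of y' + p(t) y = f(t) by variation of parameters."""
    P = sp.integrate(-p, t)              # P(t), an antiderivative of -p(t)
    v = sp.integrate(sp.exp(-P) * f, t)  # v(t), with v'(t) = e^{-P(t)} f(t)
    return sp.expand((A + v) * sp.exp(P))

# The equation of Example 5.26 below: y' + 3y/t = t^2.
print(solve_linear_de(3/t, t**2))  # expected: A/t**3 + t**3/6
```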

Note: The method of variation of parameters makes more sense after taking linear algebra, since the method uses determinants. We therefore restrict ourselves to just one example to illustrate this method.

Example 5.26. Solving an IVP Using Variation of Parameters.

Find the solution of the initial value problem \(y'+3y/t=t^2\), \(y(1)=1/2\).

First we find the general solution; since we are interested in a solution with a given condition at \(t=1\), we may assume \(t>0\). We start by solving the homogeneous equation as usual; call the solution \(g\):

\[g(t)=At^{-3}.\]

Then as in the discussion, \(h(t)=t^{-3}\) and \(v'(t)=t^2/t^{-3}=t^5\), so \(v(t)=t^6/6\). We know that every solution to the equation looks like

\[y=v(t)t^{-3}+At^{-3}={t^3\over6}+At^{-3}.\]

Finally we substitute \(y(1)=\frac{1}{2}\) to find \(A\):

\[\frac{1}{2}=\frac{1}{6}+A,\qquad A=\frac{1}{3},\qquad\text{so}\qquad y={t^3\over6}+{1\over3t^3}.\]

Subsubsection 5.3.2.2 Integrating Factor

Another common method for solving such a differential equation is by means of an integrating factor. In the differential equation \(y'+p(t)y=f(t)\), we note that if we multiply through by a function \(I(t)\) to get \(I(t)y'+I(t)p(t)y=I(t)f(t)\), the left hand side looks like it could be a derivative computed by the Product Rule:

\[{d\over dt}\bigl(I(t)y\bigr)=I(t)y'+I'(t)y.\]

Now if we could choose \(I(t)\) so that \(I'(t)=I(t)p(t)\), this would be exactly the left hand side of the differential equation. But this is just a first order homogeneous linear equation, and we know a solution is \(I(t)=e^{Q(t)}\), where \(Q(t)=\int p(t)\,dt\). Note that \(Q(t)=-P(t)\), where \(P(t)\) appears in the variation of parameters method and \(P'(t)=-p(t)\). Now the modified differential equation is

\[e^{-P(t)}y'+e^{-P(t)}p(t)y=e^{-P(t)}f(t),\qquad\text{that is,}\qquad{d\over dt}\bigl(e^{-P(t)}y\bigr)=e^{-P(t)}f(t).\]

Integrating both sides gives

\[e^{-P(t)}y=\int e^{-P(t)}f(t)\,dt,\qquad y=e^{P(t)}\int e^{-P(t)}f(t)\,dt.\]

Definition 5.27. Integrating Factor.

Given a first order non-homogeneous linear differential equation \(y'+p(t)y=f(t)\) in standard form, the integrating factor is

\[I(t)=e^{\int p(t)\,dt}.\]

Method of Integrating Factor.

Given a first order non-homogeneous linear differential equation

\[y'+p(t)y=f(t),\]

follow these steps to determine the general solution \(y(t)\) using an integrating factor:

  1. Calculate the integrating factor \(I(t)\).
  2. Multiply the standard form equation by \(I(t)\).
  3. Simplify the left-hand side to \[{d\over dt}\bigl(I(t)y\bigr).\]
  4. Integrate both sides of the equation.

The solution can be compactly written as

\[y(t)={1\over I(t)}\left(\int I(t)f(t)\,dt+C\right).\]

Using this method, the solution of the previous example would look just a bit different.

Example 5.28. Solving an IVP Using Integrating Factor.

Find the solution of the initial value problem \(y'+3y/t=t^2\), \(y(1)=1/2\).

Notice that the differential equation is already in standard form. We begin by computing the integrating factor and obtain

\[I(t)=e^{\int(3/t)\,dt}=e^{3\ln t}=t^3.\]

Next, we multiply both sides of the DE with \(I(t)\) and get

\[t^3y'+3t^2y=t^5,\qquad\text{that is,}\qquad\bigl(t^3y\bigr)'=t^5.\]

Now we integrate both sides with respect to \(t\) and solve for \(y\):

\[t^3y={t^6\over6}+C,\qquad y={t^3\over6}+{C\over t^3}.\]

Lastly, we use the initial value \(y(1)=1/2\) to find \(C\):

\[\frac{1}{2}=\frac{1}{6}+C,\qquad C=\frac{1}{3}.\]

Hence, the solution to the DE is

\[y={t^3\over6}+{1\over3t^3}.\]
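As with variation of parameters, this example can be cross-checked symbolically (a sketch; `dsolve` chooses its own method internally):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# y' + 3y/t = t^2 with y(1) = 1/2.
sol = sp.dsolve(y(t).diff(t) + 3 * y(t) / t - t**2, y(t),
                ics={y(1): sp.Rational(1, 2)})
print(sol)  # expected: Eq(y(t), t**3/6 + 1/(3*t**3))
```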

Example 5.29. General Solution Using Integrating Factor.

Determine the general solution of the differential equation

We see that the differential equation is in standard form. We then compute the integrating factor as

where we took the arbitrary constant of integration to be zero.

Therefore, we can write the DE as

Integrating both sides with respect to (t) gives

We solve this integral by making the substitution \(u=t^3\), \(du=3t^2\,dt\):

The general solution to the DE is therefore

Exercises for Section 5.3.
Exercise 5.3.1.

Find the general solution of the following homogeneous differential equations.


MATH 3321 - Engineering Mathematics

Course Description: First order ordinary differential equations and initial value problems; higher order differential equations; vector spaces, matrices, determinants, eigenvectors and eigenvalues; applications to systems of first order equations; Laplace transforms. *Note: Students may not receive credit for both MATH 3321 and MATH 3331.

Text: Available in electronic form (PDF) through CASA for all enrolled students via an Access Code. *Note: If you misplace/lose your code, you will need to purchase another. There is no exception to this.

Note: Additional important information is contained at your instructor’s personal webpage. You are responsible for knowing all of this information.


All exams will be departmental exams given at CASA.

  1. Introduction to Differential Equations
    • 1.1 Basic Terminology
    • 1.2 n-Parameter Family of Solutions; General Solution; Particular Solution
    • 1.3 Initial-Value Conditions; Initial-Value Problems
  2. First Order Differential Equations
    • 2.1 Linear Equations
    • 2.2 Separable Equations
    • 2.3 Some Applications
    • 2.4 Direction Fields; Existence and Uniqueness
    • 2.5 Some Numerical Methods*
  3. Second Order Linear Differential Equations
    • 3.1 Introduction; Basic Terminology and Results
    • 3.2 Homogeneous Equations
    • 3.3 Homogeneous Equations with Constant Coefficients
    • 3.4 Nonhomogeneous Equations
    • 3.5 Nonhomogeneous Equations with Constant Coefficients; Undetermined Coefficients
    • 3.6 Vibrating Mechanical Systems
  4. Laplace Transforms
    • 4.1 Introduction
    • 4.2 Basic Properties of Laplace Transforms
    • 4.3 Inverse Laplace Transforms and Initial-Value Problems
    • 4.4 Applications to Discontinuous Functions
    • 4.5 Initial-Value Problems with Piecewise Continuous Nonhomogeneous Terms
  5. Linear Algebra
    • 5.1 Introduction
    • 5.2 Systems of Linear Equations; Some Geometry
    • 5.3 Solving Systems of Linear Equations
    • 5.4 Solving Systems of Linear Equations, Part 2
    • 5.5 Matrices and Vectors
    • 5.6 Square Matrices; Inverse of a Matrix and Determinants
    • 5.7 Vectors; Linear Dependence and Linear Independence
    • 5.8 Eigenvalues and Eigenvectors
  6. Systems of First Order Linear Differential Equations
    • 6.1 Higher-Order Linear Differential Equations
    • 6.2 Systems of Linear Differential Equations
    • 6.3 Homogeneous Systems
    • 6.4 Homogeneous Systems with Constant Coefficients
    • 6.5 Nonhomogeneous Systems
    • 6.6 Some Applications

CSD Accommodations:

Academic Adjustments/Auxiliary Aids: The University of Houston System complies with Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990, pertaining to the provision of reasonable academic adjustments/auxiliary aids for students who have a disability. In accordance with Section 504 and ADA guidelines, University of Houston strives to provide reasonable academic adjustments/auxiliary aids to students who request and require them. If you believe that you have a disability requiring an academic adjustments/auxiliary aid, please visit   The Center for Students with DisABILITIES (CSD)   website at   http://www.uh.edu/csd/   for more information.

Accommodation Forms: Students seeking academic adjustments/auxiliary aids must, in a timely manner (usually at the beginning of the semester), provide their instructor with a current Student Accommodation Form (SAF) (paper copy or   online   version, as appropriate) from the CSD office before an approved accommodation can be implemented.

Details of this policy, and the corresponding responsibilities of the student are outlined in   The Student Academic Adjustments/Auxiliary Aids Policy (01.D.09)   document under [STEP 4: Student Submission (5.4.1 & 5.4.2), Page 6]. For more information please visit the   Center for Students with Disabilities Student Resources   page.

Additionally, if a student is requesting a (CSD approved) testing accommodation, then the student will also complete a Request for Individualized Testing Accommodations (RITA) paper form to arrange for tests to be administered at the CSD office. CSD suggests that the student meet with their instructor during office hours and/or make an appointment to complete the RITA form to ensure confidentiality.

*Note: RITA forms must be completed at least 48 hours in advance of the original test date. Please consult your   counselor   ahead of time to ensure that your tests are scheduled in a timely manner. Please keep in mind that if you run over the agreed upon time limit for your exam, you will be penalized in proportion to the amount of extra time taken.

Counseling and Psychological Services (CAPS) can help students who are having difficulties managing stress, adjusting to college, or feeling sad and hopeless. You can reach   (CAPS) by calling 713-743-5454 during and after business hours for routine appointments or if you or someone you know is in crisis. No appointment is necessary for the   "Let's Talk"   program, a drop-in consultation service at convenient locations and hours around campus.


Supersymmetry Methods in Random Matrix Theory

Riemannian Symmetric Superspace

The linear equation [17] associates with every point x ∈ M a four-dimensional vector space of solutions, V_x. As the point x moves on M the vector spaces V_x turn and twist; thus, they form what is called a vector bundle V over M. (The bundle at hand turns out to be nontrivial, i.e., there exists no global choice of coordinates for it.)

A section of V is a smooth mapping v : M → V such that v(x) ∈ V_x for all x ∈ M. The sections of V are to be multiplied in the exterior sense, as they represent anticommuting degrees of freedom; hence the proper object to consider is the exterior bundle, ∧V.

It is a beautiful fact that there exists a unique action of the Lie superalgebra g on the sections of ∧V by first-order differential operators, or derivations for short. (Be advised, however, that this canonical g-action is not well known in physics or mathematics.)

The manifold M is a symmetric space, that is, a Riemannian manifold with G-invariant geometry. Its metric tensor, g, uniquely extends to a second-rank tensor field (still denoted by g) which maps pairs of derivations of ∧V to sections of ∧V, and is invariant with respect to the g-action. This collection of objects – the symmetric space M, the exterior bundle ∧V over it, the action of the Lie superalgebra g on the sections of ∧V, and the g-invariant second-rank tensor g – form what the author calls a “Riemannian symmetric superspace,” M.


Many-Electron Wavefunctions: Slater, Hartree–Fock and Related Methods

7.8.2 General Solution for the Linear Chain

Coulson (1938a, 1938b) gave the general solution for the system of homogeneous linear equations for the linear polyene chain with N atoms:

with the boundary conditions:

The general solution is the “standing” wave:

From the first boundary condition we obtain:

The general equation gives:

From the second boundary condition it follows that:

Therefore, the general solution for the linear chain will be:


Trivial example

The system of one equation in one unknown \(2x=4\) has the solution \(x=2\).

However, a linear system is commonly considered as having at least two equations.

Simple nontrivial example

The simplest kind of nontrivial linear system involves two equations and two variables:

One method for solving such a system is as follows. First, solve the top equation for x in terms of y:

Now substitute this expression for x into the bottom equation:

A general system of m linear equations with n unknowns can be written as

Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.

Vector equation

One extremely helpful view is that each unknown is a weight for a column vector in a linear combination.

This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.

Matrix equation

The vector equation is equivalent to a matrix equation of the form

where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries.

The number of vectors in a basis for the span is now expressed as the rank of the matrix.

A solution of a linear system is an assignment of values to the variables x1, x2, …, xn such that each of the equations is satisfied. The set of all possible solutions is called the solution set.

A linear system may behave in any one of three possible ways:

  1. The system has infinitely many solutions.
  2. The system has a single unique solution.
  3. The system has no solution.

Geometric interpretation

For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set.

For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the entire line passing through these points. [6]

For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n.

General behavior

In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations.

  • In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system.
  • In general, a system with the same number of equations and unknowns has a single unique solution.
  • In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system.

In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations.

The following pictures illustrate this trichotomy in the case of two variables:

One equation Two equations Three equations

The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point.

It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point).

A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns.

Independence

The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence.

For example, the equations

are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations.

For a more complicated example, the equations

are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.

Consistency

A linear system is inconsistent if it has no solution, and otherwise it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, that may always be rewritten as the statement 0 = 1 .

For example, the equations

are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1 . The graphs of these equations on the xy-plane are a pair of parallel lines.

It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations

are inconsistent. Adding the first two equations together gives 3x + 2y = 2 , which can be subtracted from the third equation to yield 0 = 1 . Any two of these equations have a common solution. The same phenomenon can occur for any number of equations.

In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.

Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank; hence in such a case there are an infinitude of solutions. The rank of a system of equations (i.e. the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1.
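The rank test of the Rouché–Capelli theorem is easy to run numerically; a small sketch with NumPy's `matrix_rank`, using the 3×3 system solved in the elimination example below:

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0],
              [2.0, 4.0, 3.0]])
b = np.array([5.0, 7.0, 8.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix

if rank_A < rank_Ab:
    print("inconsistent: no solution")
elif rank_A == A.shape[1]:
    print("consistent with a unique solution")  # rank equals number of variables
else:
    print(f"consistent with {A.shape[1] - rank_A} free parameter(s)")
```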

Equivalence

Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set.

There are several algorithms for solving a system of linear equations.

Describing the solution

When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example (x = 3, y = −2, z = 6). When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like (3, −2, 6) for the previous example.

To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables.

For example, consider the following system:

The solution set to this system can be described by the following equations:

Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y.

Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of higher order may describe a plane, or higher-dimensional set.

Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:

y = 11/7 − (3/7)x and z = −1/7 − (1/7)x.

Here x is the free variable, and y and z are dependent.

Elimination of variables

The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:

  1. In the first equation, solve for one of the variables in terms of the others.
  2. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and one fewer unknown.
  3. Repeat until the system is reduced to a single linear equation.
  4. Solve this equation, and then back-substitute until the entire solution is found.

For example, consider the following system:

x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8

Solving the first equation for x gives x = 5 + 2z − 3y, and plugging this into the second and third equation yields

−4y + 12z = −8
−2y + 7z = −2

Solving the first of these equations for y yields y = 2 + 3z, and plugging this into the second equation yields z = 2. We now have:

x = 5 + 2z − 3y
y = 2 + 3z
z = 2

Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and y = 8 into the first equation yields x = −15. Therefore, the solution set is the single point (x, y, z) = (−15, 8, 2).
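To double-check the arithmetic, a minimal NumPy sketch (assuming the standard numpy package) solves the same system directly:

    import numpy as np

    A = np.array([[1.0, 3.0, -2.0],
                  [3.0, 5.0, 6.0],
                  [2.0, 4.0, 3.0]])
    b = np.array([5.0, 7.0, 8.0])

    print(np.linalg.solve(A, b))   # [-15.   8.   2.]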

Row reduction

In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix:

[ 1  3  −2 |  5 ]
[ 3  5   6 |  7 ]
[ 2  4   3 |  8 ]
This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:

Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.

Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.

There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. Gauss–Jordan elimination applied to the matrix above produces, after a sequence of elementary row operations,

[ 1  0  0 | −15 ]
[ 0  1  0 |   8 ]
[ 0  0  1 |   2 ]

The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
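The row operations can be mechanized. The following is an illustrative Gauss–Jordan routine in Python (the helper name rref and the pivoting tolerance are choices made here, not part of the text), applied to the same augmented matrix:

    import numpy as np

    def rref(M, tol=1e-12):
        # Reduce M to reduced row echelon form (returns a new array)
        M = M.astype(float).copy()
        rows, cols = M.shape
        pivot_row = 0
        for col in range(cols):
            if pivot_row >= rows:
                break
            # Type 1: swap in the row with the largest entry (partial pivoting)
            best = pivot_row + int(np.argmax(np.abs(M[pivot_row:, col])))
            if abs(M[best, col]) < tol:
                continue                  # no pivot in this column
            M[[pivot_row, best]] = M[[best, pivot_row]]
            # Type 2: scale the pivot row so the pivot equals 1
            M[pivot_row] /= M[pivot_row, col]
            # Type 3: subtract multiples of the pivot row from the other rows
            for r in range(rows):
                if r != pivot_row:
                    M[r] -= M[r, col] * M[pivot_row]
            pivot_row += 1
        return M

    aug = np.array([[1, 3, -2, 5],
                    [3, 5, 6, 7],
                    [2, 4, 3, 8]])
    print(rref(aug))   # identity block with last column -15, 8, 2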

Cramer's rule

Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the running example

x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8

can be written as

x = det(A_x)/det(A), y = det(A_y)/det(A), z = det(A_z)/det(A).

For each variable, the denominator det(A) is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix (A_x, A_y, or A_z) in which the corresponding column has been replaced by the vector of constant terms (5, 7, 8).
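A sketch of the rule in NumPy (assuming numpy; the determinants are evaluated numerically rather than expanded by hand), applied to the running example:

    import numpy as np

    A = np.array([[1.0, 3.0, -2.0],
                  [3.0, 5.0, 6.0],
                  [2.0, 4.0, 3.0]])
    b = np.array([5.0, 7.0, 8.0])

    det_A = np.linalg.det(A)          # here det(A) = -4, so the rule applies
    x = []
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b                  # replace column i by the constant terms
        x.append(np.linalg.det(Ai) / det_A)
    print(x)                          # approximately [-15.0, 8.0, 2.0]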

Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.

Matrix solution

If A is a square n × n matrix with nonzero determinant, the system Ax = b has the unique solution

x = A⁻¹b

where A⁻¹ is the inverse of A. More generally, regardless of whether m = n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose pseudoinverse of A, denoted A⁺, as follows:

x = A⁺b + (I − A⁺A)w

where w is a vector of free parameters ranging over all possible n × 1 vectors. Solutions exist if and only if A(A⁺b) = b; in that case, setting w = 0 gives the particular solution x = A⁺b.
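A minimal sketch, assuming NumPy (np.linalg.pinv computes the Moore–Penrose pseudoinverse via the SVD); the underdetermined system is the two-equation example from earlier:

    import numpy as np

    A = np.array([[1.0, 3.0, -2.0],
                  [3.0, 5.0, 6.0]])
    b = np.array([5.0, 7.0])

    A_plus = np.linalg.pinv(A)     # Moore-Penrose pseudoinverse
    p = A_plus @ b                 # minimum-norm particular solution
    print(np.allclose(A @ p, b))   # True: the system is consistent

    # All solutions have the form p + (I - A_plus A) w for arbitrary w;
    # w below is an arbitrary illustrative choice
    w = np.array([1.0, -2.0, 0.5])
    x = p + (np.eye(3) - A_plus @ A) @ w
    print(np.allclose(A @ x, b))   # True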

Other methods

While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b.
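A sketch of this reuse pattern with SciPy's LU routines (assuming scipy; the second right-hand side is an arbitrary illustration):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[1.0, 3.0, -2.0],
                  [3.0, 5.0, 6.0],
                  [2.0, 4.0, 3.0]])

    lu, piv = lu_factor(A)            # factor once (with partial pivoting)
    b1 = np.array([5.0, 7.0, 8.0])
    b2 = np.array([0.0, 1.0, 0.0])
    print(lu_solve((lu, piv), b1))    # [-15.   8.   2.]
    print(lu_solve((lu, piv), b2))    # reuses the same factorization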

If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.
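The Cholesky route mentioned above can be sketched with SciPy (the matrix below is an arbitrary symmetric positive definite example, not taken from the text):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])   # symmetric positive definite
    b = np.array([1.0, 2.0, 3.0])

    c, low = cho_factor(A)            # roughly half the work of a general LU
    print(cho_solve((c, low), b))     # agrees with np.linalg.solve(A, b)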

A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. [7]
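A minimal sketch of one classical iterative scheme, Jacobi iteration (the matrix is an arbitrary diagonally dominant example chosen so that the iteration converges; this illustrates the idea rather than the specific methods cited above):

    import numpy as np

    def jacobi(A, b, iterations=50):
        # Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)
        D = np.diag(A)                 # diagonal entries of A
        R = A - np.diagflat(D)         # off-diagonal part
        x = np.zeros_like(b)
        for _ in range(iterations):
            x = (b - R @ x) / D
        return x

    A = np.array([[10.0, 2.0, 1.0],
                  [1.0, 8.0, 2.0],
                  [2.0, 1.0, 9.0]])
    b = np.array([13.0, 11.0, 12.0])
    print(jacobi(A, b))                # close to the exact solution (1, 1, 1)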

A system of linear equations is homogeneous if all of the constant terms are zero:

a_11·x_1 + a_12·x_2 + ⋯ + a_1n·x_n = 0
a_21·x_1 + a_22·x_2 + ⋯ + a_2n·x_n = 0
    ⋮
a_m1·x_1 + a_m2·x_2 + ⋯ + a_mn·x_n = 0

A homogeneous system is equivalent to a matrix equation of the form

Ax = 0

where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.

Homogeneous solution set

Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix ( det(A) ≠ 0 ) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties:

  1. If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system.
  2. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system.

These are exactly the properties required for the solution set to be a linear subspace of ℝⁿ. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. Numerical solutions to a homogeneous system can be found with a singular value decomposition.
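A sketch of that SVD computation in NumPy (the homogeneous system is the earlier two-equation example with the constants set to zero; the 1e-12 rank tolerance is a choice made here):

    import numpy as np

    A = np.array([[1.0, 3.0, -2.0],
                  [3.0, 5.0, 6.0]])

    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12))
    null_basis = Vt[rank:]                 # rows of V^T past the rank span the null space

    v = null_basis[0]
    print(np.allclose(A @ v, 0))           # True: v solves Ax = 0
    print(np.allclose(A @ (2.5 * v), 0))   # True: scalar multiples are solutions too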

Relation to nonhomogeneous systems

There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system:

Ax = b and Ax = 0.

Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as

{ p + v : v is any solution to Ax = 0 }.

Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p.

This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A.
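A short numerical illustration of this translation picture (a sketch assuming NumPy; p is taken to be the pseudoinverse solution, and v = (−7, 3, 1) is a homogeneous solution read off from the earlier parametrization x = −1 − 7z, y = 2 + 3z):

    import numpy as np

    A = np.array([[1.0, 3.0, -2.0],
                  [3.0, 5.0, 6.0]])
    b = np.array([5.0, 7.0])

    p = np.linalg.pinv(A) @ b           # one particular solution of Ax = b
    v = np.array([-7.0, 3.0, 1.0])      # a solution of Ax = 0

    # Every p + t*v also solves Ax = b: the flat is the null space shifted by p
    for t in (0.0, 1.0, -3.5):
        print(np.allclose(A @ (p + t * v), b))   # True each time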


Subspaces

3.1 Introduction

Throughout this chapter, we will be studying ℝⁿ, the set of n-dimensional column vectors with real-valued components. We continue our study of matrices by considering a class of subsets of ℝⁿ called subspaces. These arise naturally, for example, when we solve a system of n linear homogeneous equations in n unknowns. The row space, column space, and null space of the coefficient matrix play a role in many applications. We also study the concept of linear independence of a set of vectors, which gives rise to the concept of subspace dimension.

We will use the mathematical symbol ∈, which means “is an element of.” For example, u ∈ ℝ² means that u is a vector in the plane, so u = (u₁, u₂)ᵀ, where u₁, u₂ are real numbers.


STPM Further Mathematics T

Recall that you learnt in the previous section how to model a situation using recurrence relations. The equations are helpful; however, they don't help much if you are searching for a term far along the sequence. For example, for the relation a_n = 2a_{n-1} with the initial condition a_0 = 1, finding the term a_109 will be tiring, as it will take you forever to get there. When we say that we solve a recurrence relation, it means that we convert the relation into an explicit formula in terms of n instead of earlier terms, which, obviously, makes it easier to calculate the nth term.

In this section, I'll be showing you how to solve 2nd order homogeneous linear recurrence relations. The non-homogeneous case follows in the next section.

2 DISTINCT ROOTS

Given a recurrence relation a_n = 5a_{n-1} − 6a_{n-2}, with initial conditions a_0 = 1, a_1 = 0. To start off with, we let a_n = r^n. This is an educated guess, which we will eventually find to be correct. We can then further deduce that a_{n-1} = r^{n-1} and a_{n-2} = r^{n-2}. Substituting everything back into the equation, we have

r^n = 5r^{n-1} − 6r^{n-2}

Dividing the equation by r^{n-2} (which is the smallest power), we get

r^2 = 5r − 6
r^2 − 5r + 6 = 0

which is a quadratic equation! This equation is called the characteristic equation, and r is called the characteristic root. Solving the equation, we get r = 2, 3. Again, making an educated guess, we deduce that the term a_n can be represented by the equation

a_n = c1(2^n) + c2(3^n)

So you will notice that the 2^n and 3^n must have come from the characteristic roots found earlier. This is the general solution of the recurrence relation. The terms c1 and c2 are just two constants, which we will find by using the initial conditions.

When a_0 = 1,
a_0 = c1 + c2 = 1 (1)

When a_1 = 0,
a_1 = 2c1 + 3c2 = 0 (2)

Now you have 2 simultaneous equations. Using the calculator, you can easily find that c1 = 3, c2 = −2. Substituting the constants back into the equation, you get

a_n = 3(2^n) − 2(3^n)

which is what we call the particular solution. This is the final answer that we are looking for. Now, if you substitute n = 109, you get the answer for a_109 straight away! Once you have it, try finding the first 5 or 6 terms using both the recurrence relation a_n = 5a_{n-1} − 6a_{n-2} and the formula a_n = 3(2^n) − 2(3^n). Do they contradict one another? Congratulations, you just learnt how to solve homogeneous recurrence relations!
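If you would rather let a computer do that check, here is a minimal Python sketch (the helper names are mine) comparing the recurrence with the particular solution:

    def a_recursive(n):
        a = [1, 0]                             # a_0 = 1, a_1 = 0
        for i in range(2, n + 1):
            a.append(5 * a[i - 1] - 6 * a[i - 2])
        return a[n]

    def a_closed(n):
        return 3 * 2**n - 2 * 3**n             # a_n = 3(2^n) - 2(3^n)

    for n in range(6):
        print(n, a_recursive(n), a_closed(n))  # the two columns agree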

2 EQUAL ROOTS

However, the above method is only valid for 2 distinct roots of the characteristic equation. Take another example: a_n = −4a_{n-1} − 4a_{n-2}, a_0 = 0, a_1 = 1. You get a characteristic equation r^2 + 4r + 4 = 0, so r = −2. If you take the general solution to be a_n = c1(−2)^n, then you are totally wrong. The correct answer should be a_n = c1(−2)^n + c2·n(−2)^n. Notice the extra factor of n in the second term. To summarize:

1. If the characteristic roots r1 and r2 are distinct, represent the solution as a_n = c1·r1^n + c2·r2^n.
2. If the characteristic roots are equal (r1 = r2 = r), represent the solution as a_n = c1·r^n + c2·n·r^n.

Distinct roots may be real or complex. The method is the same for both.
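The same kind of check as before works for the equal-roots case (a Python sketch; the constants c1 = 0 and c2 = −1/2 are worked out here from a_0 = 0 and a_1 = 1, and are not given in the text above):

    def a_recursive(n):
        a = [0, 1]                              # a_0 = 0, a_1 = 1
        for i in range(2, n + 1):
            a.append(-4 * a[i - 1] - 4 * a[i - 2])
        return a[n]

    def a_closed(n):
        # c1*(-2)^n + c2*n*(-2)^n with c1 = 0, c2 = -1/2
        return -0.5 * n * (-2) ** n

    for n in range(6):
        print(n, a_recursive(n), a_closed(n))   # the columns agree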

I will not discuss methods for solving higher-order recurrence relations here; the approach, however, is the same. Just represent a_n = r^n, and you will obtain a cubic, quartic, or higher-degree characteristic equation, which you can eventually solve to get an answer. Simple? ☺


TR 9:30-10:50 am, Room 209 Rush.

April 3. Introduction. Classification of differential equations. Direction fields.

April 5. First-order linear equations. Integrating factors.

Problems: 1, 3, 13-15, 38 (2.1) and 1-8 (2.2).

April 10. Undetermined coefficients and variation of parameters for first-order equations.

Problems: 5 (p. 16), 38 (p. 41), 1-15 (odds), 22, 27, 29 (pp. 75-77).

April 12. Quiz: solving first-order ODEs.

Existence and uniqueness of solutions. Linear vs. nonlinear equations.

Reading: p. 49 (homogeneous equations), p. 77 (Bernoulli equations).

Problems: 30-32 (pp. 49-50, no need to sketch), 27, 29 (p. 77), class examples.

April 19. Quiz: homogeneous and Bernoulli equations.

April 24. Population dynamics (continued). Exact equations.

Problems: 17, 22 (2.5), 1-13 (odds), 19, 32 (2.6).

April 26. Exact equations and integrating factors.

May 1. Euler's method. Difference equations.

May 8. Second-order linear equations.

Constant-coefficient case: characteristic equation with distinct real roots.

Homogeneous equations: superposition principle. The Wronskian determinant.

Problems: 1-7 (odds), 9-12, 28 (3.1), 1-9 (3.2).

May 10. Linear independence and the Wronskian. Fundamental solutions.

Problems: 2-14 (evens), 18, 22 (3.5).

May 15. Quiz: Second-order linear equations.

Simple harmonic oscillator.

Nonhomogeneous equations: structure of solutions.

Method of undetermined coefficients.

Problems: 1-6, 17-19 (3.4), 23-25 (3.5), 1-3, 13, 15 (3.6).

May 22. Quiz: reduction of order, complex roots of the characteristic equation.

The method of undetermined coefficients. Examples.

May 24. Variation of parameters. Higher-order equations.

Linear equations with constant coefficients (order n).

Problems: 3, 1, 2, 5 (3.7), 7, 8, 12 (4.1), 11-14, 22, 37 (4.2).

May 29. Test 2: Sections 3.1-3.7, 4.1, 4.2. Sample test. Answers to the test.

May 31. Undetermined coefficients and variation of parameters for higher-order equations.

Problems: 1, 3, 5, 13, 14 (4.3) and 1, 3, 5, 13 (4.4).

Problems: 1, 3, 5, 6, 7, 9 (6.1) and 1, 3, 5, 11-13 (6.2).

June 7. Quiz: Laplace transform.

The Laplace transform (continued).

Problems: class examples.

June 11. Question session (4 pm in the office).

June 12. Final exam, 8-10 a.m., Curtis 451.

Focus: homework, examples (textbook, class), midterm tests, samples, quizzes.

Material: everything we have covered in class. Expect 6-8 questions with parts (true/false + midterm-style). You may use a one-sided sheet with formulas and statements of theorems.

