GradePack



Consider the following code with numbered lines:

```python
1)  betai = np.array([-2, -1.5, -1, -0.5, 0., 0.5, 1, 1.5, 2])  # Features
2)  ERi = np.array([-0.08, -0.06, -0.03, -0.01, 0.02, 0.04, 0.07, 0.1, 0.12])  # Labels
3)  hidden = tf.keras.layers.Dense(units=1, input_shape=[1])
4)  model = tf.keras.Sequential([hidden])
5)  loss = 'mse'
6)  optimizer = 'Adam'
7)  model.compile(loss=loss, optimizer=optimizer)
8)  history = model.fit(betai, ERi, epochs=10000, verbose=False)
9)  plt.plot(history.history['loss'])
10) plt.xlabel('Number of Epochs')
11) plt.ylabel('Loss');
```

In which line does the training of the neural network take place?


Sentence nine is a/n


Suppose we consider a one-step binomial tree model to price a derivative that pays in the up state and in the down state.  What is the price of this derivative?


Consider the pseudo code below to obtain the efficient portfolios:

```python
from scipy.optimize import minimize

f = lambda w: TO BE FILLED
mu = np.linspace(15, 30, 31)
sd_optimal = np.zeros_like(mu)
w_optimal = np.zeros([31, 5])
for i in range(len(mu)):
    # Optimization Constraints
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},
            {'type': 'eq', 'fun': lambda w: w @ ER * 252 * 100 - mu[i]})
    result = minimize(f, np.zeros(5), constraints=cons)
    w_optimal[i, :] = result.x
    sd_optimal[i] = np.sqrt(result.fun)
```

Assuming that ER and Cov are given, what should we substitute TO BE FILLED for in order to get the desired result?
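The `minimize(...)` pattern used above can be illustrated on its own with a minimum-variance portfolio. The `ER` and `Cov` values below are made-up placeholders, not the quiz data, and the objective shown here is plain portfolio variance rather than the quiz's blank:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up daily expected returns and covariance matrix (assumptions)
ER = np.array([0.0004, 0.0006, 0.0005])
Cov = np.array([[0.0002, 0.00005, 0.00004],
                [0.00005, 0.0003, 0.00006],
                [0.00004, 0.00006, 0.00025]])

f = lambda w: w @ Cov @ w                                  # portfolio variance
cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},)   # fully invested
result = minimize(f, np.ones(3) / 3, constraints=cons)

w_opt = result.x       # minimum-variance weights, summing to 1
```

With equality constraints, `minimize` defaults to the SLSQP solver; `result.x` holds the optimal weights and `result.fun` the achieved objective value, exactly as the quiz code reads them.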


Consider the same pseudo code from the previous question to compute the efficient portfolios:

```python
from scipy.optimize import minimize

f = lambda w: TO BE FILLED
mu = np.linspace(15, 30, 31)
sd_optimal = np.zeros_like(mu)
w_optimal = np.zeros([31, 5])
for i in range(len(mu)):
    # Optimization Constraints
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},
            {'type': 'eq', 'fun': lambda w: w @ ER * 252 * 100 - mu[i]})
    result = minimize(f, np.zeros(5), constraints=cons)
    w_optimal[i, :] = result.x
    sd_optimal[i] = np.sqrt(result.fun)
```

For any given iteration i, what is the shape of the array w_optimal[i, :]?
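The relevant NumPy indexing rule can be checked directly: selecting one row of a 2-D array with an integer index and a full column slice drops the row axis and returns a 1-D array:

```python
import numpy as np

# Same shape as in the pseudo code: 31 target returns, 5 assets
w_optimal = np.zeros([31, 5])

row = w_optimal[0, :]   # integer row index + full slice -> 1-D array
print(row.shape)        # (5,)
```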


The law of diminishing marginal utility helps to explain why supply curves are generally upward sloping.


Suppose that you want to create a function that receives the weights, the expected return on risky assets, and the covariance matrix between the assets, and returns the annualized portfolio volatility and expected return.

```python
def ER_SD(TO BE FILLED):
    ERp = w @ ER * 100 * 252
    SDp = np.array(np.sqrt(w @ Cov @ w)) * 100 * np.sqrt(252)
    return SDp, ERp
```

What should we substitute TO BE FILLED for in order to achieve the desired result?


Consider the following array:

```python
a = np.array([30, 101, 18, 190, -55])
```

If I want to select the last element of this array, what is the only option below that does not return the element -55?


What is the main purpose of scaling your data?
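The idea behind scaling can be sketched with a simple standardization in NumPy: putting features on a comparable range so that no single feature dominates distance- or gradient-based learners. Standardization is one illustrative choice among several scalers:

```python
import numpy as np

# A feature with values on very different magnitudes
x = np.array([1.0, 5.0, 10.0, 50.0])

# Standardize: subtract the mean, divide by the standard deviation
x_scaled = (x - x.mean()) / x.std()
# x_scaled now has mean 0 and standard deviation 1
```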


Consider the following matrices:

