GradePack


If we include a callback to stop training early and prevent overfitting when we do not observe progress for 10 epochs, like the one below:

callback_list = [tf.keras.callbacks.EarlyStopping(monitor='mae', patience=10)]

where should we include it in our ML architecture?
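In Keras, a callback list like this is passed to the `callbacks` argument of `model.fit` (e.g. `model.fit(..., callbacks=callback_list)`). The patience logic such a callback implements can be sketched in plain Python, assuming a metric where lower is better:

```python
# Sketch of the patience logic behind early stopping, assuming we
# monitor a metric where lower is better (e.g. 'mae').
def early_stop_epoch(metric_history, patience=10):
    """Return the epoch (0-based) at which training would stop,
    or None if training runs to completion."""
    best = float("inf")
    wait = 0
    for epoch, value in enumerate(metric_history):
        if value < best:          # progress observed: reset the counter
            best = value
            wait = 0
        else:                     # no improvement this epoch
            wait += 1
            if wait >= patience:  # patience exhausted: stop here
                return epoch
    return None

# A run that improves for 3 epochs, then plateaus for 12 more:
mae = [0.9, 0.7, 0.5] + [0.5] * 12
print(early_stop_epoch(mae, patience=10))
```

The real callback also supports options such as restoring the best weights; this sketch only illustrates the "no progress for `patience` epochs" rule.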


When performing sentiment analysis, we need to convert text to numbers to feed our deep neural network. To transform lists of vectorized sentences into arrays of the same size, we can apply the
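The excerpt is cut off, but the transformation it describes (making vectorized sentences the same length) is typically done by padding. A hand-rolled numpy illustration of post-padding, written here only to show the idea (Keras ships a `pad_sequences` utility for this):

```python
import numpy as np

def pad_sequences(seqs, maxlen=None, value=0):
    """Pad (or truncate) integer sequences to a common length,
    mimicking post-padding: shorter rows are filled with `value`."""
    if maxlen is None:
        maxlen = max(len(s) for s in seqs)
    out = np.full((len(seqs), maxlen), value, dtype=int)
    for i, s in enumerate(seqs):
        trimmed = s[:maxlen]
        out[i, :len(trimmed)] = trimmed
    return out

# Three vectorized sentences of different lengths:
sentences = [[4, 12, 7], [9, 2], [5, 8, 1, 3]]
print(pad_sequences(sentences))
# every row now has length 4, shorter ones padded with 0 at the end
```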


K-fold cross-validation is a very useful method to reduce overfitting. It is especially useful when we have 
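The mechanics of k-fold splitting can be sketched with numpy index arrays (a hand-rolled illustration; libraries such as scikit-learn provide a ready-made `KFold`):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation:
    each sample lands in the validation fold exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

for train, val in kfold_indices(10, k=5):
    print(len(train), len(val))  # 8 training, 2 validation samples per fold
```

Averaging the validation scores over the k folds gives a less noisy estimate of generalization than a single train/validation split.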


Which of the following is the pattern of organization for paragraph two?


On her fourth prenatal visit, a patient asks why urine tests are done on every visit. The most appropriate response is:


Consider a portfolio with the following weights w, expected return on each risky asset ER, and covariance matrix Cov below.

w = np.array([0.05, 0.03])
ER = np.array([0.10, 0.02])
Cov = np.cov([[0.004, 0.0156], [0.0156, 0.009]])

Which of the following expressions represents the portfolio variance in Python?
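For reference, a portfolio's variance is the quadratic form w' Cov w. A minimal numpy check, assuming Cov is taken directly as the 2x2 covariance matrix (the question builds it via np.cov, but the variance formula is the same either way):

```python
import numpy as np

w = np.array([0.05, 0.03])                        # portfolio weights
Cov = np.array([[0.004, 0.0156], [0.0156, 0.009]])  # assumed covariance matrix

# Portfolio variance is the quadratic form w' Cov w,
# which numpy's @ operator expresses directly:
port_var = w @ Cov @ w
print(port_var)
```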


Consider the code below:

num_epochs = 500
batch = 100

# Construct hidden layers
inputs = tf.keras.layers.Dense(units=16, activation='relu', input_shape=[X_train.shape[1]])
hidden = tf.keras.layers.Dense(units=16, activation='relu')
outputs = tf.keras.layers.Dense(units=1)

# Stack the layers
model = tf.keras.Sequential([inputs, hidden, outputs])

# Loss function and optimizer (with learning rate)
loss = 'mse'
optimizer = tf.keras.optimizers.Adam(0.001)

# Compile the model
model.compile(loss=loss, optimizer=optimizer, metrics=['mae'])

# Train the model
history = model.fit(x_train_normalized, y_train_normalized, epochs=num_epochs, batch_size=batch, validation_split=0.1, verbose=0)

After running the code above, how do you pick the new number of epochs to train your model one last time on the whole dataset?
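One common recipe (an assumption here, not stated in the question) is to read off the epoch where validation loss bottomed out from `history.history['val_loss']` and retrain with that epoch count. A sketch with an illustrative loss curve standing in for the real history:

```python
import numpy as np

# history.history['val_loss'] would come from model.fit above;
# this list is illustrative only.
val_loss = [0.90, 0.55, 0.40, 0.38, 0.41, 0.45, 0.52]

# Epoch with the lowest validation loss; +1 because epochs are 1-based.
best_epochs = int(np.argmin(val_loss)) + 1
print(best_epochs)  # then retrain: model.fit(X, y, epochs=best_epochs)
```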


Consider an economy with three possible states: bad, normal, and good. The probability of each state is given in the array of probabilities "p" below. The payoff of a risky stock in each state is given in the array R.

p = np.array([0.1, 0.6, 0.3])
R = np.array([[0.05], [0.03], [-0.01]])

If we want to compute the expected return on this risky asset, what is the command line we should execute in Python?
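The expected return is the probability-weighted average of the state payoffs, which in numpy is a dot product. A minimal sketch:

```python
import numpy as np

p = np.array([0.1, 0.6, 0.3])            # state probabilities (sum to 1)
R = np.array([[0.05], [0.03], [-0.01]])  # payoff in each state

# Expected return: sum over states of probability * payoff, i.e. p @ R.
ER = p @ R
print(ER)
```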


Consider the code below with numbered lines:

1) def h(x):
2)     return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
3)
4) x = np.linspace(-4, 4, 51)
5) y = np.zeros_like(x)
6)
7) for i in range(len(y)-1):
8)     y[i] = h(x[i])
9) plt.plot(x, y)

If we run the code above, we will receive an error. In which line does the error lie?
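Without settling which numbered line the question targets, a vectorized rewrite avoids manual index loops (and their off-by-one pitfalls) altogether. A sketch assuming numpy, with plotting omitted:

```python
import numpy as np

def h(x):
    # Standard normal probability density function
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

x = np.linspace(-4, 4, 51)
y = h(x)  # vectorized: h is applied to every element, no loop needed

print(y.shape)                 # (51,)
print(round(float(y[25]), 4))  # density at x = 0, the middle grid point
```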


In an ML classification problem, before training the model for the last time, we select the number of epochs that 

