
```python
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Generate Noisy Samples from a Known Function
def f(x):
    return np.sin((x - 3)**2) + 0.5 * np.cos(6 * (x - 4))

# Generate 1000 evenly spaced sample locations from -1 to 7 (linspace includes both endpoints)
x_samples = np.linspace(-1, 7, 1000)
y_samples = f(x_samples)

# Add Gaussian noise to the function values to simulate real measurements
noise_level = 0.2
rng = np.random.default_rng(0)  # fixed seed so the example is reproducible
y_noisy_samples = y_samples + noise_level * rng.normal(size=x_samples.shape)

# Plot the true function and noisy data points
plt.figure(figsize=(10, 6))
plt.plot(x_samples, y_samples, label='True Function', color='black')
plt.scatter(x_samples, y_noisy_samples, s=5, alpha=0.7, edgecolor='k', marker='o', c='blue', label='Noisy Data Points')
plt.title('True Function and Noisy Observations')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()

# Step 2: Implement the Gaussian Process Regression Model
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

# Define a kernel for the GP model
kernel = C(1.0, (1e-3, 1e3)) * RBF(length_scale=1.0, length_scale_bounds=(1e-2, 1e2))

# Initialize and fit the Gaussian Process Regressor; alpha is the variance of
# the observation noise added to the diagonal of the kernel matrix, so we pass
# the known noise variance rather than 0
gp_model = GaussianProcessRegressor(kernel=kernel, alpha=noise_level**2)
gp_model.fit(x_samples.reshape(-1, 1), y_noisy_samples)
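
# Optional sanity check (not part of the original walkthrough): inspect the
# kernel hyperparameters chosen by maximum-likelihood optimization during fit()
print(gp_model.kernel_)
print('Log-marginal-likelihood:', gp_model.log_marginal_likelihood_value_)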

# Step 3: Make Predictions and Evaluate Performance
from sklearn.metrics import mean_squared_error

# Generate test points across the same range, -1 to 7
test_points = np.linspace(-1, 7, 200).reshape(-1, 1)

# Predict at these points using the trained GP model
y_pred, sigma = gp_model.predict(test_points, return_std=True)
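
# Optionally draw a few functions from the GP posterior to visualize its
# uncertainty directly (sample_y is part of scikit-learn's GP API)
posterior_draws = gp_model.sample_y(test_points, n_samples=3, random_state=0)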

# Evaluate with Mean Squared Error against the true (noise-free) function
# values at the test points
mse = mean_squared_error(f(test_points.ravel()), y_pred)
print(f'Mean Squared Error vs. true function: {mse:.4f}')

# Plot the results
plt.figure(figsize=(14, 8))

# True function over the entire range
x_fine = np.linspace(-1, 7, 500)
y_true_fine = f(x_fine)

plt.plot(x_fine, y_true_fine, label='True Function', color='black', linewidth=2)

# A thinned subsample of the noisy observations (every 50th point) so the
# scatter stays readable
plt.scatter(x_samples[::50], y_noisy_samples[::50], s=50, alpha=0.6, edgecolor='k', marker='D', c='red', label='Observed Data')

# GP prediction with a shaded 95% confidence band (±1.96 standard deviations)
plt.plot(test_points, y_pred, color='blue', linewidth=2, label='GP Prediction')
plt.fill_between(test_points.ravel(),
                 y_pred - 1.96 * sigma,
                 y_pred + 1.96 * sigma, alpha=0.2, color='blue')

plt.title('Gaussian Process Regression with Confidence Intervals')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()
```

### Explanation:
1. **Generating Samples**: We generate 1000 evenly spaced points between -1 and 7 (inclusive; `np.linspace` includes both endpoints). The function `f(x)` is a combination of sine and cosine terms, and Gaussian noise is added to its values to simulate real-world observations.

2. **Plotting True Function vs. Noisy Data**: We plot the true function over its full range along with scatter points representing the noisy measurements.

3. **Implementing the GP Regression Model**:
   - A kernel is defined as the product of a constant kernel and an `RBF` (Radial Basis Function) kernel from `sklearn`; the sketch after this list shows what the RBF kernel computes.
   - The Gaussian Process Regressor is initialized with this kernel, with `alpha` set to the known noise variance (`noise_level**2`) so the model accounts for observation noise.
   - Fitting the model trains it on the generated noisy samples, optimizing the kernel hyperparameters by maximizing the log-marginal likelihood.
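
As a minimal sketch of what the RBF kernel computes (the standard squared-exponential form that `sklearn`'s `RBF` implements; the `rbf_kernel` helper name is ours, for illustration only):

```python
import numpy as np

def rbf_kernel(xa, xb, length_scale=1.0, variance=1.0):
    # k(a, b) = variance * exp(-(a - b)^2 / (2 * length_scale^2)):
    # nearby inputs are strongly correlated, distant inputs nearly independent
    sqdist = (xa[:, None] - xb[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / length_scale**2)

x = np.array([0.0, 1.0, 2.0])
print(rbf_kernel(x, x))  # 3x3 covariance matrix with 1.0 on the diagonal
```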

4. **Making Predictions and Evaluating**:
   - Test points are selected across the range of interest (-1 to 7).
   - The trained GP regressor predicts at these points and also returns the associated uncertainties (standard deviations); a closed-form sketch of what `predict` computes follows this list.
   - Mean Squared Error against the true function values at the test points is used as the evaluation metric.
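
For reference, the posterior that `predict` evaluates has a closed form. Below is a minimal NumPy sketch under the same squared-exponential kernel, reusing the illustrative `rbf_kernel` helper from above (`noise_var` plays the role of `alpha`):

```python
def gp_posterior(x_train, y_train, x_test, length_scale=1.0, noise_var=0.04):
    # Posterior mean: K_*^T (K + noise_var * I)^{-1} y
    # Posterior cov:  K_** - K_*^T (K + noise_var * I)^{-1} K_*
    K = rbf_kernel(x_train, x_train, length_scale) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    K_inv = np.linalg.inv(K)  # fine at this size; prefer a Cholesky solve for large n
    mean = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mean, np.sqrt(np.diag(cov))
```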

5. **Visualizing Results**: Finally, we visualize the true function against both the observed noisy data and the GP regression results, including confidence intervals derived from the standard deviation estimates. This illustrates how well the GP captures the underlying pattern while accounting for uncertainty in its predictions; a quick calibration check follows below.
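
One simple way to sanity-check those intervals (an optional check, not part of the walkthrough above): measure how often the true function actually falls inside the 95% band; well-calibrated intervals should cover roughly 95% of the test points.

```python
y_true_test = f(test_points.ravel())
inside = np.abs(y_true_test - y_pred) <= 1.96 * sigma
print(f"Empirical coverage of the 95% band: {inside.mean():.1%}")
```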