Channel: Active questions tagged python - Stack Overflow

How can I emulate Microsoft Excel's Solver functionality (GRG Nonlinear) in python?

[Screenshot demonstrating the use of Excel's Solver]

I have a task to automate a certain Excel worksheet. The worksheet implements its logic with an Excel add-in called Solver. It takes a single value (-1.95624) in cell $O$9 (the result of the computations highlighted in red and blue in the screenshot) as an input, and then returns three values, C, B1 and B2, using Solver's "GRG Nonlinear" solving method. My task is to emulate this logic in Python. Below are my attempts. The major problem is that I am not getting the same values for C, B1 and B2 that Excel's Solver add-in computes.
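For context, Excel's "GRG Nonlinear" engine is a local, gradient-based optimizer, so its closest SciPy analogue is `scipy.optimize.minimize` with a gradient-based method such as SLSQP, minimizing the same sum-of-squared-error cell that Solver targets from the same starting values the worksheet cells hold. A minimal sketch of that idea, using made-up toy data (the real xData/yData live in the workbook, not in this snippet):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data standing in for the worksheet values (assumption: the real
# xData/yData come from the workbook, not from this snippet).
x = np.array([-0.20, -0.10, 0.00, 0.10, 0.20])
y = np.array([-2.20, -1.90, -1.20, -0.50, -0.15])

def model(x, C, B1, B2):
    # Three-parameter logistic of the kind the worksheet appears to fit
    return C / (1.0 + np.exp(-B1 * (x - B2)))

def sse(p):
    # Sum of squared residuals: the quantity Excel's Solver minimizes
    return float(np.sum((y - model(x, *p)) ** 2))

# GRG Nonlinear is a local, gradient-based method, so the starting point
# matters; Excel starts from whatever is currently in the C/B1/B2 cells.
start = np.array([-2.0, -10.0, 0.0])
result = minimize(sse, start, method="SLSQP")
```

Because both GRG Nonlinear and SLSQP are local methods, matching Excel's answer usually requires matching Excel's starting cell values, not just the model formula.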

Given these datasets for xData and yData, the correct output should be: C = -2.35443383, B1 = -14.70820051, B2 = 0.0056217

Here's My 1st Attempt:

import warnings

import numpy
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit, differential_evolution

xData = numpy.array([
    -2.59772914040242, -2.28665528866907, -2.29176070881848, -2.31163972446061,
    -2.28369414349715, -2.27911303233721, -2.28222332344644, -2.39089535619106,
    -2.32144325648778, -2.17235002006179, -2.22906032068685, -2.42044014499938,
    -2.71639505549322, -2.65462061336346, -2.47330475191616, -2.33132910807216,
    -2.33025978869114, -2.61175064230516, -2.92916553244925, -2.987503044973,
    -3.00367414706232, -1.45507812104723,
])
yData = numpy.array([
    0.0692847120775066, 0.0922342111029099, 0.0918076382491768, 0.0901635409944003,
    0.0924824386284127, 0.092867647175396, 0.092605957740688, 0.0838696111204451,
    0.0893625419994501, 0.102261091024881, 0.097171046758256, 0.0816272542472914,
    0.0620128251290935, 0.0657047909578125, 0.0777509345715382, 0.088561321341585,
    0.088647672874835, 0.0683859871424735, 0.0507304952495273, 0.0479936476914665,
    0.0472601632188253, 0.18922126828463,
])

def func(x, a, b, Offset):
    # Sigmoid A with offset, from zunzun.com
    return 1.0 / (1.0 + numpy.exp(-a * (x - b))) + Offset

def sumOfSquaredError(parameterTuple):
    # Objective for the genetic algorithm: sum of squared residuals
    warnings.filterwarnings("ignore")  # do not print warnings raised during the search
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)

def generate_Initial_Parameters():
    # Data extrema, used to build the search bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)
    parameterBounds = []
    parameterBounds.append([minX, maxX])  # search bounds for a
    parameterBounds.append([minX, maxX])  # search bounds for b
    parameterBounds.append([0.0, maxY])   # search bounds for Offset
    # seed=3 makes the differential-evolution search repeatable
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# generate initial parameter values
geneticParameters = generate_Initial_Parameters()

# curve fit the test data, starting from the genetic-algorithm estimate
params, covariance = curve_fit(func, xData, yData, p0=geneticParameters, maxfev=50000)

# convert numpy float64 values to plain Python floats
params = [float(param) for param in params]
C, B1, B2 = params

# input_value_1 / input_value_2 are passed in from SQL Server
# (sp_execute_external_script); they are not defined in this snippet
OutputDataSet = pd.DataFrame({
    "C": [C], "B1": [B1], "B2": [B2],
    "ProType": [input_value_1], "RegType": [input_value_2],
})
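One quick diagnostic before swapping optimizers: plug Excel's reported solution into each candidate model and compute the sum of squared errors on a few of the worksheet points. If neither model gives a near-zero error at Excel's own parameters, the worksheet is fitting a different formula than the one being emulated. A minimal sketch, using four of the data points from the question:

```python
import numpy as np

# Excel's reported solution, from the question
C, B1, B2 = -2.35443383, -14.70820051, 0.0056217

# Four of the worksheet points from the question
x = np.array([-2.59772914040242, -2.28665528866907,
              -3.00367414706232, -1.45507812104723])
y = np.array([0.0692847120775066, 0.0922342111029099,
              0.0472601632188253, 0.18922126828463])

def sigmoid_with_offset(x, a, b, offset):
    # Model form from the first attempt
    return 1.0 / (1.0 + np.exp(-a * (x - b))) + offset

def scaled_logistic(x, C, B1, B2):
    # Model form from the second attempt
    return C / (1.0 + np.exp(-B1 * (x - B2)))

# Residual error of each model, evaluated at Excel's own parameters
sse_offset = float(np.sum((y - sigmoid_with_offset(x, C, B1, B2)) ** 2))
sse_scaled = float(np.sum((y - scaled_logistic(x, C, B1, B2)) ** 2))
```

If both errors come out large, neither guess matches the worksheet's formula; inspecting the cell formulas behind $O$9 and behind Solver's objective cell is the reliable way to recover the exact model Excel is fitting.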

Any ideas would be helpful. Thanks in advance.

Here's My 2nd Attempt, with a different model function:

import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

# Access input data passed from SQL Server (InputDataSet is provided by
# sp_execute_external_script; it is not defined in this snippet)
datasets = pd.DataFrame(InputDataSet)

def logistic_regression(x, C, B1, B2):
    return C / (1 + np.exp(-B1 * (x - B2)))

def initial_coefficients(num_features):
    # Unseeded random starting point, so each run can give different results
    return np.random.randn(num_features)

# x_data and y_data as fetched from SQL Server
x_data = np.array([
    -2.59772914040242, -2.28665528866907, -2.29176070881848, -2.31163972446061,
    -2.28369414349715, -2.27911303233721, -2.28222332344644, -2.39089535619106,
    -2.32144325648778, -2.17235002006179, -2.22906032068685, -2.42044014499938,
    -2.71639505549322, -2.65462061336346, -2.47330475191616, -2.33132910807216,
    -2.33025978869114, -2.61175064230516, -2.92916553244925, -2.987503044973,
    -3.00367414706232, -1.45507812104723,
])
y_data = np.array([
    0.0692847120775066, 0.0922342111029099, 0.0918076382491768, 0.0901635409944003,
    0.0924824386284127, 0.092867647175396, 0.092605957740688, 0.0838696111204451,
    0.0893625419994501, 0.102261091024881, 0.097171046758256, 0.0816272542472914,
    0.0620128251290935, 0.0657047909578125, 0.0777509345715382, 0.088561321341585,
    0.088647672874835, 0.0683859871424735, 0.0507304952495273, 0.0479936476914665,
    0.0472601632188253, 0.18922126828463,
])

initial_guess = initial_coefficients(3)  # example initial guess

# Fit the logistic function to the data
params, covariance = curve_fit(logistic_regression, x_data, y_data,
                               p0=initial_guess, maxfev=5000)

# convert numpy float64 values to plain Python floats
params = [float(param) for param in params]
C, B1, B2 = params

# input_value_1 / input_value_2 are passed in from SQL Server, as above
OutputDataSet = pd.DataFrame({
    "C": [C], "B1": [B1], "B2": [B2],
    "ProType": [input_value_1], "RegType": [input_value_2],
})

But it still doesn't hit the desired result of C = -2.35443383, B1 = -14.70820051, B2 = 0.0056217.
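One reproducibility problem in the second attempt: `initial_coefficients` draws its starting point from NumPy's unseeded global RNG, so every run hands `curve_fit` a different initial guess and can land in a different local minimum. A small sketch of a seeded replacement (keeping the same function name) that makes every run start from the same point:

```python
import numpy as np

def initial_coefficients(num_features, seed=3):
    # A dedicated, seeded generator: the same call always returns the
    # same starting point, so curve_fit behaves the same on every run.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(num_features)

guess_a = initial_coefficients(3)
guess_b = initial_coefficients(3)  # identical to guess_a by construction
```

Repeatability alone will not make a local fitter reach Excel's answer, but it makes failures diagnosable: with a fixed start, any remaining mismatch points at the model formula or the starting values rather than at run-to-run randomness.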

