Nigel's Programming Corner
Sunday, December 13, 2020
I installed Mono and MonoGame on a Mac. I'm very pleased that, after very little effort, I got my Space Invaders clone to work! The majority of the changes involved removing the Xbox code, which means I'll have to create a scoreboard.
Thursday, June 18, 2020
Functional Programming
Just watched an interesting video on YouTube called
Learning Functional Programming with JavaScript from Anjana Vakil. I found it a good introduction to functional programming. At the end, she recommended an article by Mary Rose Cook called An introduction to functional programming, which teaches the concepts through code examples.
Some of the basic building blocks of functional programming are:
- Data is immutable -- don't change data.
- Use first-class functions: they should only operate on the data passed in and not change anything outside the function.
- Do not use loops -- use functions instead.
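A minimal Python sketch of these three ideas (the names and numbers here are purely illustrative):

```python
# Immutable data: a tuple cannot be changed in place,
# so we build new values instead of mutating old ones.
prices = (100, 250, 40)

# A pure function: it depends only on its inputs and
# touches nothing outside itself.
def with_tax(price, rate=0.08):
    return round(price * (1 + rate), 2)

# No explicit loop: map the function over the data instead.
taxed = tuple(map(with_tax, prices))
print(taxed)  # (108.0, 270.0, 43.2)
```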
Tuesday, August 27, 2019
Automating API Throughput Reports
One of my monthly tasks is a status
report that includes statistics on the throughput of our APIs. I do this by
using the data gathered by our API Manager. Before I got smart, I would extract
the data through the UI week by week, put it into a spreadsheet, chop off the
extra headers, and try to convince Excel to produce a set of graphs that were
stylized the same way as the previous month's. This often took more time than I
liked.
Upon reading about Matplotlib and pandas, I realized this could handily be automated with Python. My first attempt read the data from a CSV exported from the API Manager, but then I wrote a function to pull it straight from SQL Server using pyodbc:
def query_api_mgr_monthly(month, year, contract):
    conn = pyodbc.connect(r'Driver={SQL Server};'
                          'Server=;'
                          'Database=;'
                          'Trusted_Connection=yes')
    query_str = "select CONVERT(varchar, START_DTS, 1) [date], sum(SUCCESS_COUNT) successes, sum(ERROR_COUNT) failures, avg(MIN_RESP_TIME) minResp, avg(MAX_RESP_TIME) maxResp "
    query_str += "from [sql_soa_prod_svc].METRICS_DAILY_VIEW "
    query_str += "where START_DTS >= '" + str(month) + "/1/" + str(year) + "' "
    query_str += "and CONTRACT_NAME like '%s%%' " % contract
    query_str += "group by START_DTS "
    query_str += "order by START_DTS asc"
    apiMgr = pd.read_sql_query(sql=query_str, con=conn)
    return apiMgr
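As an aside, building SQL by concatenating strings like this is open to injection and quoting bugs. A sketch of a parameterized variant (same table and column names as above, using pyodbc's '?' placeholders; the helper name is mine) could look like:

```python
def build_monthly_query(month, year, contract):
    # '?' placeholders are bound by pyodbc at execution time,
    # so values are never spliced into the SQL text itself.
    query_str = (
        "select CONVERT(varchar, START_DTS, 1) [date], "
        "sum(SUCCESS_COUNT) successes, sum(ERROR_COUNT) failures, "
        "avg(MIN_RESP_TIME) minResp, avg(MAX_RESP_TIME) maxResp "
        "from [sql_soa_prod_svc].METRICS_DAILY_VIEW "
        "where START_DTS >= ? "
        "and CONTRACT_NAME like ? "
        "group by START_DTS "
        "order by START_DTS asc"
    )
    params = ("%d/1/%d" % (month, year), contract + "%")
    return query_str, params
```

pd.read_sql_query accepts the bound values alongside the SQL, e.g. pd.read_sql_query(sql=query_str, con=conn, params=params).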
One function plots the traffic through the API Manager: the number of successes per day, the number of failures per day, and the monthly average number of transactions per day.
def plot_traffic(apiMgr, title):
    # calculate average number of transactions
    total = [apiMgr["successes"][i] + apiMgr["failures"][i] for i in range(len(apiMgr["successes"]))]
    ave = np.average(total)
    aveline = list(it.repeat(ave, len(apiMgr["successes"])))
    # Build graph of daily transaction counts
    plt.plot(apiMgr["date"], apiMgr["successes"], label="successes")
    plt.plot(apiMgr["date"], apiMgr["failures"], label="failures")
    plt.plot(aveline, label="average")
    plt.xlabel('date')
    plt.ylabel("transactions")
    plt.xticks(ticks=apiMgr["date"], labels=apiMgr["date"], rotation="vertical")
    plt.legend()
    plt.title(title)
    fig_size = plt.rcParams["figure.figsize"]
    fig_size[0] = 10
    fig_size[1] = 4
    plt.rcParams["figure.figsize"] = fig_size
    plt.show()
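As a side note, the list comprehension and itertools.repeat can be replaced by pandas' vectorized column arithmetic. A small sketch on made-up data (column names matching the query above):

```python
import pandas as pd

# Toy frame standing in for the API Manager query result.
apiMgr = pd.DataFrame({
    "successes": [100, 120, 90],
    "failures": [5, 3, 7],
})

# Element-wise sum of the two columns, then the scalar mean;
# no index loop or itertools.repeat needed.
total = apiMgr["successes"] + apiMgr["failures"]
ave = total.mean()
```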
Another function plots the maximum and minimum response times per day:
def plot_response_times(apiMgr, title):
    # Build graph of minimum and maximum response times
    plt.plot(apiMgr["date"], apiMgr["minResp"])
    plt.plot(apiMgr["date"], apiMgr["maxResp"])
    plt.xlabel('date')
    plt.ylabel("response time in milliseconds")
    plt.xticks(ticks=apiMgr["date"], labels=apiMgr["date"], rotation="vertical")
    plt.legend(["minimum", "maximum"])
    plt.title(title)
    fig_size = plt.rcParams["figure.figsize"]
    fig_size[0] = 10
    fig_size[1] = 4
    plt.rcParams["figure.figsize"] = fig_size
    plt.show()
Finally, I query the database for a summary of the transactions processed each month over the last year:
def query_apiMgr_monthly(month, year, contract):
    conn = pyodbc.connect(r'Driver={SQL Server};'
                          'Server=;'
                          'Database=;'
                          'Trusted_Connection=yes')
    query_str = "select CONVERT(varchar, START_DTS, 1) [date], sum(SUCCESS_COUNT) successes, sum(ERROR_COUNT) failures, avg(MIN_RESP_TIME) minResp, avg(MAX_RESP_TIME) maxResp "
    query_str += "from [sql_soa_prod_svc].METRICS_DAILY_VIEW "
    query_str += "where START_DTS >= '" + str(month) + "/1/" + str(year) + "' "
    query_str += "and CONTRACT_NAME like '%s%%' " % contract
    query_str += "group by START_DTS "
    query_str += "order by START_DTS asc"
    apiMgr = pd.read_sql_query(sql=query_str, con=conn)
    return apiMgr
And graph it:
def plot_annual_use(apiMgr):
    # Build graph of monthly transaction totals
    plt.plot(apiMgr["Month"], apiMgr["Count"])
    plt.xlabel('Month')
    plt.ylabel("Total Transactions x 10,000,000")
    plt.xticks(ticks=apiMgr["Month"], labels=apiMgr["Month"], rotation="vertical")
    plt.title("ApiMgr Monthly Totals")
    fig_size = plt.rcParams["figure.figsize"]
    fig_size[0] = 10
    fig_size[1] = 4
    plt.rcParams["figure.figsize"] = fig_size
    plt.show()
Finally, the imports and driver function:
import pandas as pd
import matplotlib.pyplot as plt
import pyodbc
import seaborn as sns
import itertools as it
import numpy as np
sns.set()
def graph_apiMgr(month, year):
    apiMgr = query_apiMgr_monthly(month, year, "")
    plot_traffic(apiMgr, "Daily Transaction Count")
    plot_response_times(apiMgr, "Minimum and Maximum Response Times")
    apiMgr = query_apiMgr_annual()
    plot_annual_use(apiMgr)
    apiMgr = query_akana_monthly(month, year, "contract1")
    plot_traffic(apiMgr, "contract1 Daily Transaction Count")
    plot_response_times(apiMgr, "contract1 Minimum and Maximum Response Times")
This now gives me some nice, consistent graphs in just the time it takes to run the script in Spyder.
Tuesday, July 9, 2019
Finding needles in the haystack
An article in Vice today describes a team that used a machine learning algorithm to analyze papers on materials science. They used the Word2Vec algorithm to find word associations across the papers and thereby suggest findings that might otherwise be missed. I find these kinds of machine learning applications very exciting.
Thursday, May 23, 2019
Hands-On Machine Learning with Python
I've started reading this book on machine learning with Python. A couple of years ago, I completed Andrew Ng's Coursera class on machine learning. It was great, but it went deep into the mechanics of each algorithm, which gave me a good perspective on how they work. Now I want to drill into the application of those algorithms to data science and machine learning. I am only 10% of the way through and am already excited about the capabilities of the Python libraries.
Monday, May 29, 2017
COBOL on Windows
For better or worse, I decided to learn COBOL. After a bit of research, I came across GNU Cobol, which had been Open COBOL in a former life. I wanted this to work on Windows, and that's where the trouble began.
The easiest route was to run GNU Cobol in the Linux Layer for Windows, otherwise known as Bash for Windows 10. In Bash, it was simple enough to "sudo apt-get install open-cobol", which installs GNU Cobol 1.1, gcc, and everything else needed to build COBOL programs. Of course, this solution produced Linux binaries.
Getting GNU Cobol to produce Win32/64 binaries was quite a bit more tedious. GNU doesn't distribute binaries, so it's up to you to build it yourself or find someone who has built it already. In the end I used the binary distribution from kiska.net for x64. However, GNU Cobol only generates C code and then executes GCC to build the executable. The distribution from kiska didn't include GCC, so I installed MinGW and used the GCC from there.
Next, I had to configure the following environment variables for the COBOL compiler:
COB_CONFIG_DIR=c:\Program Files\OpenCOBOL\config
COB_COPY_DIR=c:\Program Files\OpenCOBOL\copy
COB_LIBRARY_PATH=C:\Program Files\OpenCOBOL\lib
COB_SCREEN_ESC=Y
COB_SCREEN_EXCEPTIONS=Y
The location of the MinGW bin directory and the GNU Cobol bin directory have to be added to the PATH.
Finally, the COBOL headers and libraries had to be added to MinGW where the GCC could find them. This included libcob.h, gmp.h, libcob, libpdcurses*, libgmp*, libdb*, libcob* from the lib and include directories.
With all that done, I am now able to produce 64-bit binaries for Windows.
Wednesday, July 29, 2015
(2^i)*(5^j)
I ran into an interesting problem the other day: someone asked me to produce, for all non-negative integers i and j, an ordered list of the values of (2^i)*(5^j). After wracking my brain for a couple of hours, I finally came up with the following solution. It works in sieve fashion, testing each integer for whether such an i and j exist. For each candidate, it solves for i, such that i = log2(potential / (5^j)). This allows the program to output as many values as the integer representation can support. I limit the values of j tested by requiring 5^j <= potential, which keeps the work per candidate small.
class Program
{
    static void Main()
    {
        int potential = 0;
        do
        {
            if (ExistsIandJ(potential))
                Console.WriteLine("{0}", potential);
            potential++;
        } while (potential < int.MaxValue);
    }

    private static bool ExistsIandJ(int potential)
    {
        // potential = (2^i)*(5^j)
        // 1 = (2^i)*(5^j)/potential
        // 1/(2^i) = (5^j)/potential or (2^i) = potential / (5^j)
        // i = log2 (potential / (5^j))
        for (var j = 0; Math.Pow(5, j) <= potential; j++)
        {
            var i = Math.Log(potential / Math.Pow(5, j), 2);
            if (i == Math.Truncate(i))
                return true;
        }
        return false;
    }
}
I expected this to be fairly easy to begin with, then realized that wasn't going to be true. I then thought there must be a pattern to the values of i and j, but after a while realized there was not. Trees might be the answer, but only for a finite number of values, since you would have to determine when to prune the tree. Would you miss an answer? It was later that I realized that if I tested every candidate for an i and j, it could be done indefinitely. I pondered how to determine i and j, then decided to solve for i. The pieces fell into place.
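The same test can also be written with exact integer arithmetic, avoiding the floating-point log altogether: divide out all factors of 2 and 5, and the number has the form (2^i)*(5^j) exactly when 1 remains. A quick Python sketch of that variant (the function name is mine):

```python
def is_two_five_product(n):
    # True when n == (2**i) * (5**j) for some non-negative i, j.
    if n < 1:
        return False
    while n % 2 == 0:   # strip every factor of 2
        n //= 2
    while n % 5 == 0:   # strip every factor of 5
        n //= 5
    return n == 1       # nothing else may remain

print([n for n in range(1, 30) if is_two_five_product(n)])
# [1, 2, 4, 5, 8, 10, 16, 20, 25]
```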