What’s New About Deep Learning?

By John White | Last Updated on Jun 15, 2021

Deep Learning, which is different from Machine Learning, is driving big advances in visual recognition and speech. DL requires different raw material and much greater data volume. It is not like churn models or predictive maintenance: DL is focused on speech, text, and visuals.

What is different is that all of this is being accomplished without explicit programming. Neural nets lay dormant for years; today they are popular again. Deep Learning is interesting for what it can do, but it requires a new focus. With ML, the models are generated by the algorithm. Why is the phrase “cognitive” hot now?

Suggested read: Develop deep learning skills in the organization

Machine Learning is a broad term that basically refers to presenting carefully curated data to computer algorithms that search for patterns and systematically build models. While the algorithms are explicitly programmed, the models are not: carefully curated data is fed to an algorithm that was built by a human.

Supervised ML is given a dataset with a “target variable” and “input variables.” A modeling algorithm automatically generates a model (a formula or ruleset) that establishes a relationship between the target and some or all of the input variables. There are anywhere from one to many algorithms for each use case.
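
To make this concrete, here is a minimal sketch of supervised learning, assuming scikit-learn and a small hypothetical churn dataset (the column names are illustrative, not from the article):

```python
# A minimal supervised-learning sketch, assuming scikit-learn and a
# hypothetical churn dataset; column names are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "tenure_months":   [3, 48, 12, 60, 6, 24],   # input variable
    "monthly_charges": [80, 35, 65, 30, 90, 55], # input variable
    "churned":         [1, 0, 1, 0, 1, 0],       # target variable
})

X = df[["tenure_months", "monthly_charges"]]  # input variables
y = df["churned"]                             # target variable

# The algorithm (logistic regression) is explicitly programmed;
# the model (the fitted coefficients) is generated from the data.
model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)
```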

If we have a record of three years of churn data, we should present all of it to the algorithm. Remember, though, that when doing supervised learning with historical data, any errors in that history will hurt the model.

Read: Difference between AI, ML & deep learning

Virtually everything becomes supervised learning with binary outcomes, because we need to make good decisions. In healthcare, it is much harder to come up with an intervention plan for predicting the next medical diagnosis code than for a binary admit/readmit outcome. This is why problem definition is important. Regression or a simple decision tree can be a good choice, because a black-box model may not be appropriate.
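
As an illustration of that point, here is a hedged sketch, assuming scikit-learn: a shallow decision tree whose complete ruleset can be printed and reviewed by a domain expert, something a black-box model does not offer:

```python
# A minimal sketch, assuming scikit-learn: a shallow decision tree whose
# rules can be printed and audited, unlike a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The full ruleset fits on a few lines and can be checked by hand.
print(export_text(tree, feature_names=list(X.columns)))
```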

Then what is unsupervised learning? Not having a target is not sufficient. What makes it unsupervised is that the notion of the whole model being right or wrong does not apply. Unsupervised learning is simply finding natural groupings and determining whether they are common or rare. Watch Malcolm Gladwell’s TED Talk on spaghetti sauce: one-third of Americans prefer chunks of vegetables in their spaghetti sauce, and the researchers discovered something they were not looking for.
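
A minimal clustering sketch, assuming scikit-learn and hypothetical taste-preference scores in the spirit of the spaghetti-sauce story; there is no target variable, only groupings and their relative sizes:

```python
# A minimal unsupervised-learning sketch, assuming scikit-learn:
# find natural groupings in (hypothetical) taste-preference data and
# check how common each group is. There is no "right answer" here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical scores for "chunkiness" and "spiciness" preferences.
prefs = np.vstack([
    rng.normal([2, 2], 0.5, (60, 2)),   # a mild, smooth segment
    rng.normal([8, 3], 0.5, (30, 2)),   # an extra-chunky segment
    rng.normal([5, 8], 0.5, (10, 2)),   # a small spicy segment
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(prefs)
# How common or rare is each natural grouping?
print(np.bincount(labels) / len(labels))
```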

Computers “teaching themselves”? Google Brain has progressed from a “diagonal line mode” to a “cat mode” to a “face mode.” Data is becoming more granular.

If I have a set of three billion transactions, can I sample just some of them? How can you randomly sample when you do not know which customers the transactions belong to? Active customers from the previous year? You cannot inspect that many rows; in practice you run on far fewer. Deep learning tells a different story, with tens or hundreds of millions of rows of data. All roads lead to binary classification: most solutions are ultimately deployed as classification models.
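
One common workaround, sketched below under the assumption of a pandas DataFrame with a hypothetical customer_id column, is to sample at the customer level and then keep all of those customers’ transactions, rather than sampling raw rows whose owners are unknown:

```python
# A minimal sketch, assuming pandas and a hypothetical transactions table:
# sample customers first, then keep all of their transactions.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3, 4, 4],
    "amount":      [20, 35, 5, 12, 8, 100, 42, 7],
})

# Draw a random subset of customers (here 50%), then pull their rows.
sampled_customers = (
    transactions["customer_id"].drop_duplicates().sample(frac=0.5, random_state=0)
)
sample = transactions[transactions["customer_id"].isin(sampled_customers)]
print(sample)
```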

Also read: Moving deeper into AI

Real-world implemented solutions are not one system; they are a series of systems. Predictive analytics is the selection and analysis of historical data accumulated during the normal course of doing business. Predictive models are created by finding and validating previously unknown patterns, deploying the models, and scoring current data to make measurably better data-driven decisions.

Management and HR are now emphasizing the programming side, with Python, R, and other technologies. How do we use point-of-sale (POS), healthcare, and transactional data that has not been perfectly defined? We need to be thoughtful about which data is relevant and applicable.

Features need to be dynamic in nature. Translate the business question into a predictive analytics problem. Most predictive analytics models produce a propensity score for a specific decision: it estimates which outcome is most likely. You need a plausible scenario in which to deploy the model, so start with plausible deployment scenarios. If you get insights on the side, that’s great.

  • Data -> Models -> Scores is the goal.
  • Know how you’re going to use your findings upfront.
  • Know that the minimum number of records to score is one (see the sketch after this list).
  • Model building is not computationally complex.
  • How often you run the model depends on how quickly the variables are changing.
  • Decisions are driven by data and scores, formulas, and rule sets.
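
Here is a minimal sketch of that Data -> Models -> Scores flow, assuming scikit-learn and reusing the hypothetical churn features from the earlier sketch; note that a single record is enough to produce a propensity score and drive a decision:

```python
# A minimal Data -> Models -> Scores sketch, assuming scikit-learn and
# the hypothetical churn features used earlier in this article.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.DataFrame({
    "tenure_months":   [3, 48, 12, 60, 6, 24],
    "monthly_charges": [80, 35, 65, 30, 90, 55],
    "churned":         [1, 0, 1, 0, 1, 0],
})
model = LogisticRegression().fit(
    train[["tenure_months", "monthly_charges"]], train["churned"]
)

# The minimum number of records to score is one: a propensity score
# (probability of churn) for a single customer drives a single decision.
one_customer = pd.DataFrame({"tenure_months": [5], "monthly_charges": [85]})
print(model.predict_proba(one_customer)[0, 1])
```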

The Predictive Model Markup Language (PMML) has been around for about 20 years, and most analytics tools are PMML-compatible. One of the first big uses of data mining was landline churn in the 1980s. After the model is developed, how much does the process resemble the scientific method? There is no statement of a hypothesis; statistics mostly answers yes/no questions. Revisit the KPIs of the business that drove the project in the first place.
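
As a hedged illustration of PMML export (the package choice is an assumption, not from the article), the third-party sklearn2pmml library, which requires a Java runtime, can serialize a fitted pipeline to a portable .pmml file:

```python
# A minimal PMML-export sketch, assuming the third-party sklearn2pmml
# package (pip install sklearn2pmml; it needs a Java runtime installed).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True, as_frame=True)
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)

# The resulting .pmml file is portable across PMML-compatible tools.
sklearn2pmml(pipeline, "iris_tree.pmml")
```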

Featured article: Most popular AI models

Perform an initial cost-benefit analysis:

  • Start with what you know.
  • Estimate the annual dollar cost of the problem.
  • Estimate the cost of a possible solution: is the benefit enough to justify a team’s effort? (A quick sketch of the arithmetic follows.)
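
A back-of-the-envelope version of that analysis; every number below is hypothetical and exists only to show the arithmetic:

```python
# A back-of-the-envelope cost-benefit sketch; all figures are
# hypothetical and exist only to illustrate the arithmetic.
annual_problem_cost = 2_000_000   # estimated yearly dollar cost of the problem (assumed)
expected_reduction  = 0.10        # fraction a model might recover (assumed)
project_cost        = 250_000     # team effort for the first year (assumed)

annual_benefit = annual_problem_cost * expected_reduction
print(f"Annual benefit: ${annual_benefit:,.0f}")
print(f"First-year ROI: {annual_benefit / project_cost:.1f}x")
```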