Part Two: Deep Learning Indaba.
Hello, fellow techies and tech enthusiasts. First and foremost, thank you for the positive feedback on the last blog (Part One). I am really happy that you got to grasp something new from what was taught at the Indaba.
Back to business😁
TUESDAY SESSIONS
The day started with the presentation of the Kambule Awards.
What do I mean by Kambule Awards?🤷‍♀️ I got you:
The Kambule Doctoral Award celebrates African research excellence: its recipients are those who uphold Thamsanqa Kambule’s legacy as a defender of learning, a seeker of knowledge, and an activist for equality.
The award was presented at the annual Deep Learning Indaba in Nairobi in August 2019. The winners were invited to speak at the Indaba and received a cash prize of KES 70,000; in partnership with the Africa Oxford Initiative, the winner also received a fully-sponsored trip to present their Ph.D. research at the University of Oxford.
Shout out to the Winners and Honourable Mentions.
WINNERS
Marcellin Atemkeng
Rhodes University, South Africa
Ph.D. Thesis “Data Compression, Field of Interest Shaping and Fast Algorithms for Direction-dependent Deconvolution in Radio Interferometry”
Hicham Hammouchi
Sidi Mohammed Ben Abdellah University, Morocco
Master’s Thesis “Reconnaissance visuelle de la parole appliquée aux laryngectomies” (“Visual speech recognition applied to laryngectomies”: lip reading with Hahn Convolutional Neural Networks)
HONOURABLE MENTIONS
Stephanie Muller
Ph.D. Thesis “Genome-Wide Association Between Human Genotypes and Mycobacterium tuberculosis Clades Causing Diseases”
Mhlasakululeka Mvubu
Stellenbosch University, South Africa
Master’s Thesis “An Error Correction Neural Network for Stock Market Prediction”
KEYNOTE: RUHA BENJAMIN
Prof. Ruha Benjamin is an Associate Professor of African American Studies at Princeton University, where she studies the social dimensions of science, technology, and medicine. She is also the founder of the JUST DATA Lab and the author of two books, People’s Science (Stanford) and Race After Technology (Polity), and editor of Captivating Technology (Duke). She writes, teaches, and speaks widely about the relationship between knowledge and power, race and citizenship, health and justice.
From Ruha: I arrived here by way of a winding road that has snaked through South Central Los Angeles; Conway, South Carolina; Majuro, South Pacific, and Swaziland, Southern Africa. I come from many Souths, and I tend to bring this perspective, of looking at the world from its underbelly, to my analysis.
Ruha spoke on Beyond Buzzwords: Innovation, Imagination, and Inequity in the 21st Century.
I must add, Ruha’s dreads are killing it👌 on top of the amazing keynote she gave.
Again, for the slides, email me at rose.delilahgesicho@gmail.com and I will forward them.
PRACTICAL: CONVOLUTIONAL NETWORKS
Convolution and the convolutional layer are the major building blocks used in convolutional neural networks.
Convolution is the simple application of a filter to an input that results in an activation. Repeated application of the same filter to an input results in a map of activations called a feature map, indicating the locations and strength of a detected feature in an input, such as an image.
The innovation of convolutional neural networks is the ability to automatically learn a large number of filters in parallel specific to a training dataset under the constraints of a specific predictive modeling problem, such as image classification. The result is highly specific features that can be detected anywhere on input images.
Reference: machinelearningmastery.com
To dive into the practical’s core operation:
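Below is a minimal NumPy sketch (my own illustration, not the Indaba practical’s actual code) of a single hand-made filter sliding over a tiny image to produce a feature map:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution: slide the filter over the image and
    record one activation per position -- the feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # One activation: the filter times the patch beneath it, summed.
            patch = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map

# A tiny 6x6 "image": bright on the left, dark on the right.
image = np.array([[1, 1, 1, 0, 0, 0]] * 6, dtype=float)

# A hand-made vertical-edge filter. In a CNN these values are
# not hand-made: they are learned from the training data.
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

print(conv2d(image, kernel))
# Every row reads [0. 3. 3. 0.]: strong activations exactly
# where the vertical edge sits in the image.
```

Deep learning libraries implement this same sliding-window operation (technically cross-correlation) and, as described above, learn the kernel values during training instead of hand-coding them.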
SESSION 3: RECURRENT NEURAL NETWORKS
A Recurrent Neural Network (RNN) is a type of neural network where the output from the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of each other, but in tasks like predicting the next word of a sentence, the previous words are required, so there is a need to remember them. RNNs solve this with a hidden layer; their main and most important feature is the hidden state, which remembers information about the sequence.
An RNN has a “memory” that retains information about what has been calculated so far. It uses the same parameters at every step, since it performs the same task on each input (or hidden state) to produce an output. This sharing reduces the number of parameters compared with other neural networks.
Reference: geeksforgeeks.org
Awesome content; I highly recommend this practical.
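To make the hidden-state idea concrete, here is a minimal sketch of a single RNN cell stepped through a sequence in NumPy (my own illustration, not the practical’s code; the weight names and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative sizes: 4-dimensional inputs, 8-dimensional hidden state.
input_size, hidden_size = 4, 8

# One shared set of parameters, reused at every time step --
# the "same parameters for each input" property mentioned above.
W_xh = rng.normal(0.0, 0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(0.0, 0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One time step: the new hidden state blends the current input
    with the previous hidden state (the network's 'memory')."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Feed a sequence of 5 inputs through the same cell, step by step.
sequence = rng.normal(size=(5, input_size))
h = np.zeros(hidden_size)      # initial hidden state
for x_t in sequence:
    h = rnn_step(x_t, h)       # h carries context forward across steps

print(h)  # the final hidden state summarizes the whole sequence
```

Notice that only one small set of weights exists no matter how long the sequence is; the loop reuses it at every step, which is exactly why an RNN can handle sequences of any length without growing its parameter count.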
HELLO WEDNESDAY,
STUDENT TALKS
The IndabaX was introduced as a way to experiment with the ways in which we can strengthen our Machine Learning community, and allow more people to contribute to the conversation.
A Deep Learning IndabaX is a locally-organized, one-day Indaba that helps spread knowledge and build capacity in machine learning.
The students who participated gave their talks on the projects they were working on.
Maina Elizaphan Muuro, Kenyatta University
Symposium keynote on enhancing learning through the use of Artificial Intelligence techniques.
POSTER SESSIONS
The posters were displayed, but given their large number and the amazing ideas behind them, I can’t list what each one was about. I can, however, post all the participants, for it was great work by everyone; the effort put into explaining to every passer-by (my bad🌚😊, data scientist) what your project is about must be highly appreciated.
THE WINNERS FOR THE BEST POSTERS WERE
FIRST RUNNER-UP: KALE-AB TESSERA
SECOND RUNNER-UP: AYODELE OLABIYI
Shout out to Ayodele, such a happy soul; this was well deserved.👍
Congratulations to the winners and all participants.
Applause👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏👏
Those are the events that took place on Tuesday and Wednesday.
Quote:
“The great aim of education is not knowledge but action.”
― Herbert Spencer.
Hope you grasped a thing or two from Part Two… Have a great time, techies😊
Audacious💖