Apache Spark Streaming | Real-time data processing for Hadoop | Big Data Tutorial

Published: December 4, 2022
on channel: AmpCode

This lecture is all about Apache Spark Streaming: how to process real-time data streams with the Spark Streaming API, which can ingest from multiple sources such as Kafka, the file system, Cassandra, MySQL, and HBase, process the data efficiently and reliably in real time, and write the results to many sink options such as HDFS, Kafka, Cassandra, MySQL, HBase, etc.

In the previous lecture we streamed data to HDFS using Apache Flume: we set up a Flume agent in the HDP Sandbox with a SpoolDir source listening to a directory, and streamed the files to an HDFS directory in real time.

Commands for this lecture:

wget https://raw.githubusercontent.com/ash...

cd /usr/hdp/current/flume-server

bin/flume-ng agent --conf conf --conf-file /home/maria_dev/flume_logs.conf --name a1 -Dflume.root.logger=INFO,console

mkdir spool

On Ambari, create a directory under /user/maria_dev/flume

cp access_log.txt spool/logs.txt

Go to Ambari and verify
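For reference, a flume_logs.conf along these lines would match the command above (agent name a1, SpoolDir source, HDFS sink). This is a sketch, not the exact file from the lecture; the channel name, capacity, and paths are illustrative and should match your sandbox:

```properties
# Hypothetical contents of /home/maria_dev/flume_logs.conf
a1.sources = src1
a1.channels = ch1
a1.sinks = sink1

# SpoolDir source: watch a local directory for new files
a1.sources.src1.type = spooldir
a1.sources.src1.spoolDir = /home/maria_dev/spool
a1.sources.src1.channels = ch1

# In-memory channel buffering events between source and sink
a1.channels.ch1.type = memory
a1.channels.ch1.capacity = 10000

# HDFS sink: write events to the directory created in Ambari
a1.sinks.sink1.type = hdfs
a1.sinks.sink1.hdfs.path = /user/maria_dev/flume
a1.sinks.sink1.hdfs.fileType = DataStream
a1.sinks.sink1.channel = ch1
```

Once the agent is running, copying a file into the spool directory (as in the `cp` step above) should produce a new file under /user/maria_dev/flume, which you can verify in Ambari.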

----------------------------------------------------------------------------------------------------------------------
HDP Sandbox Installation links:

Oracle VM Virtualbox: https://download.virtualbox.org/virtu...

HDP Sandbox link: https://archive.cloudera.com/hwx-sand...

HDP Sandbox installation guide: https://hortonworks.com/tutorial/sand...

-------------------------------------------------------------------------------------------------------------

Also check out similar informative videos in the field of cloud computing:

What is Big Data:    • What is Big Data? | Big Data Use Case...  

How Cloud Computing changed the world:    • How Cloud Computing changed the world!  

What is Cloud?    • What is Cloud Computing?  

Top 10 facts about Cloud Computing that will blow your mind!    • Top 10 facts about Cloud Computing th...  

Audience

This tutorial is made for professionals who want to learn the basics of Big Data Analytics using the Hadoop ecosystem and become Hadoop developers. Software professionals, analytics professionals, and ETL developers are the key beneficiaries of this course.

Prerequisites

Before you start proceeding with this course, I am assuming that you have some basic knowledge of Core Java, database concepts, and one of the Linux operating system flavors.

---------------------------------------------------------------------------------------------------------------------------

Check out our full course topic wise playlist on some of the most popular technologies:

SQL Full Course Playlist-
   • SQL Full Course  


PYTHON Full Course Playlist-
   • Python Full Course  

Data Warehouse Playlist-

   • Data Warehouse Full Course  


Unix Shell Scripting Full Course Playlist-
   • Unix Shell Scripting Full Course  

-----------------------------------------------------------------------------------------------------------------------

Don't forget to like and follow us on our social media accounts:

Facebook-
  / ampcode  

Instagram-
  / ampcode_tutorials  

Twitter-
  / ampcodetutorial  

Tumblr-
ampcode.tumblr.com

-------------------------------------------------------------------------------------------------------------------------

Channel Description-

AmpCode provides an e-learning platform with a mission of making education accessible to every student. AmpCode will provide you with tutorials and full courses on some of the best technologies in the world today. By subscribing to this channel, you will never miss out on high-quality videos on trending topics in the areas of Big Data & Hadoop, DevOps, Machine Learning, Artificial Intelligence, Angular, Data Science, Apache Spark, Python, Selenium, Tableau, AWS, Digital Marketing and many more.

#bigdata #datascience #dataanalytics #datascientist #hadoop #hdfs #hdp #mongodb #cassandra #hbase #nosqldatabase #nosql #pyspark #spark #presto #hadooptutorial #hadooptraining