
7796 Spark Jobs - Page 30

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Key Responsibilities
Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts.
Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues.
Create Datadog monitors for job status (success/failure), job duration, resource utilization, and error trends.
Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices.
Integrate Datadog with related tooling.
Conduct root cause analysis of ETL failures and performance bottlenecks.
Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives.
Document incident handling procedures and contribute to improving overall ETL monitoring maturity.
Participate in on-call rotations or scheduled support windows to manage ETL health.

Required Skills & Qualifications
3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment.
Proficiency in using Datadog for metrics, logging, alerting, and dashboards.
Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt).
Familiarity with SQL and querying large datasets.
Experience with Python, shell scripting, or Bash for automation and log parsing.
Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc.
Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring.

Preferred Qualifications
Experience with distributed tracing and APM in Datadog.
Prior experience monitoring Spark, Kafka, or streaming pipelines.
Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
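For illustration only, here is a minimal sketch of the monitor-creation task described above, using Datadog's public v1 Monitors HTTP API. The metric name etl.job.duration, the tag pipeline:daily_sales, the thresholds, and the notification handle are all invented for the example:

```python
# Sketch: create a Datadog metric monitor that alerts when a (hypothetical)
# ETL job-duration metric stays above 30 minutes for 15 minutes.
import os
import requests

DD_SITE = "https://api.datadoghq.com"  # adjust for EU/other Datadog sites

monitor = {
    "name": "ETL job duration too high - daily_sales",
    "type": "metric alert",
    # etl.job.duration and pipeline:daily_sales are assumed names
    "query": "avg(last_15m):avg:etl.job.duration{pipeline:daily_sales} > 1800",
    "message": "ETL job running long. Check the pipeline. @slack-data-oncall",
    "options": {
        "thresholds": {"critical": 1800, "warning": 1200},
        "notify_no_data": True,   # catches jobs that never report at all
        "no_data_timeframe": 60,
    },
}

resp = requests.post(
    f"{DD_SITE}/api/v1/monitor",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=monitor,
    timeout=30,
)
resp.raise_for_status()
print("Created monitor id:", resp.json()["id"])
```

A real setup would usually template such definitions (for example with Terraform) rather than creating monitors ad hoc.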

Posted 3 days ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Job Description
Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.

The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

A key aspect of the MDLZ Google Cloud BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers, ensuring consistent input to o9.

8+ years of overall industry experience and a minimum of 8-10 years of experience building and deploying large-scale data processing pipelines in a production environment.
Focus on excellence: has practical experience of data-driven approaches, is familiar with the application of data security strategy, and is familiar with well-known data engineering tools and platforms.
Technical depth and breadth: able to build and operate data pipelines and data storage; has worked on big data architecture within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of technologies adjacent to the ones they have worked on and can speak to the alternative tech choices to those made on their projects.
Implementation and automation of internal data extraction from SAP BW / HANA.
Implementation and automation of external data extraction from openly available internet data sources via APIs.
Data cleaning, curation and enrichment using Alteryx, SQL, Python, R, PySpark, SparkR.
Preparing consolidated DataMarts for use by Data Scientists and managing SQL databases.
Exposing data via Alteryx and SQL Database for consumption in Tableau.
Data documentation maintenance/updates.
Collaboration and workflow using a version control system (e.g., GitHub).
Learning ability: is self-reflective, has a hunger to improve, has a keen interest in driving their own learning, and applies theoretical knowledge to practice.
Flexible working hours: this role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Data engineering concepts: experience working with data lakes, data warehouses and data marts; has implemented ETL/ELT and SCD concepts.
ETL or data integration tool: experience in Talend is highly desirable.
Analytics: fluent in SQL and PL/SQL and has used analytics tools like BigQuery for data analytics.
Cloud experience: experienced in GCP services such as Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery.
Data sources: experience working with structured data sources like SAP, BW, flat files, RDBMS, etc., and semi-structured data sources like PDF, JSON, XML, etc.
Programming: understanding of OOP concepts and hands-on experience with Python/Java for programming and scripting.
Data processing: experience working with data processing platforms like Dataflow or Databricks.
Orchestration: experience orchestrating/scheduling data pipelines using tools like Airflow and Alteryx.
Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience
Rich experience working in the FMCG industry.
Deep knowledge of manipulating, processing, and extracting value from datasets.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and the ability to plan and prioritize work in a fast-paced environment.
Experience with MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx and Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (extractors, transformations, modeling aDSOs, queries, OpenHubs).
No relocation support available.

Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region comprises six business units, has more than 21,000 employees and operates in more than 27 countries, including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, the Philippines, Saudi Arabia, South Africa, Thailand, the United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular | Analytics & Modelling | Analytics & Data Science
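For illustration, a minimal sketch of one ingestion step such a pipeline might perform: loading a flat-file extract from Cloud Storage into BigQuery with the official Python client (project, dataset, bucket, and table names are invented):

```python
# Sketch: load a CSV extract from Cloud Storage into BigQuery with the
# official google-cloud-bigquery client. Bucket, dataset and table names
# are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

table_id = "my-project.sales_mart.channel_inventory"           # hypothetical
source_uri = "gs://my-bucket/inbound/channel_inventory_*.csv"  # hypothetical

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # in production you would pin an explicit schema
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()  # block until the load completes, raising on error

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```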

Posted 3 days ago

Apply

6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Job Description
Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.

The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.

A key aspect of the MDLZ DataHub Google BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers (> 6 months vs. 0-6 months), ensuring consistent input to o9.

6+ years of overall industry experience and a minimum of 6-8 years of experience building and deploying large-scale data processing pipelines in a production environment.
Focus on excellence: has practical experience of data-driven approaches, is familiar with the application of data security strategy, and is familiar with well-known data engineering tools and platforms.
Technical depth and breadth: able to build and operate data pipelines and data storage; has worked on big data architecture within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of technologies adjacent to the ones they have worked on and can speak to the alternative tech choices to those made on their projects.
Implementation and automation of internal data extraction from SAP BW / HANA.
Implementation and automation of external data extraction from openly available internet data sources via APIs.
Data cleaning, curation and enrichment using Alteryx, SQL, Python, R, PySpark, SparkR.
Preparing consolidated DataMarts for use by Data Scientists and managing SQL databases.
Exposing data via Alteryx and SQL Database for consumption in Tableau.
Data documentation maintenance/updates.
Collaboration and workflow using a version control system (e.g., GitHub).
Learning ability: is self-reflective, has a hunger to improve, has a keen interest in driving their own learning, and applies theoretical knowledge to practice.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Data engineering concepts: experience working with data lakes, data warehouses and data marts; has implemented ETL/ELT and SCD concepts.
ETL or data integration tool: experience in Talend is highly desirable.
Analytics: fluent in SQL and PL/SQL and has used analytics tools like BigQuery for data analytics.
Cloud experience: experienced in GCP services such as Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery.
Data sources: experience working with structured data sources like SAP, BW, flat files, RDBMS, etc., and semi-structured data sources like PDF, JSON, XML, etc.
Flexible working hours: this role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Data processing: experience working with data processing platforms like Dataflow or Databricks.
Orchestration: experience orchestrating/scheduling data pipelines using tools like Airflow and Alteryx.
Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience
Deep knowledge of manipulating, processing, and extracting value from datasets.
At least 2 years of FMCG/CPG industry experience.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and the ability to plan and prioritize work in a fast-paced environment.
Experience with MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx and Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (extractors, transformations, modeling aDSOs, queries, OpenHubs).
No relocation support available.

Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region comprises six business units, has more than 21,000 employees and operates in more than 27 countries, including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, the Philippines, Saudi Arabia, South Africa, Thailand, the United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular | Analytics & Modelling | Analytics & Data Science
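As a hedged sketch of the orchestration work mentioned above (the DAG id, task bodies, and schedule are invented; the schedule parameter assumes Airflow 2.4+):

```python
# Sketch: a minimal Airflow DAG ordering the extract -> transform -> load
# steps of a daily pipeline. Task bodies are stubs.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull data from source APIs / SAP BW")

def transform(**context):
    print("clean, curate and enrich the extract")

def load(**context):
    print("write the curated data to the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",          # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                   # every day at 02:00
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency chain
```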

Posted 3 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Job Description
We are looking for a savvy Data Engineer to join a team of modeling and architecture experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.

In this role, you will assist in maintaining the MDLZ DataHub Google BigQuery data pipelines and corresponding platforms (on-prem and cloud), working closely with global teams on DataOps initiatives. The D4GV platform spans three key GCP instances: NALA, MEU, and AMEA, supporting the global rollout of o9 across all Mondelēz BUs over the next three years.

5+ years of overall industry experience and a minimum of 2-4 years of experience building and deploying large-scale data processing pipelines in a production environment.
Focus on excellence: has practical experience of data-driven approaches, is familiar with the application of data security strategy, and is familiar with well-known data engineering tools and platforms.
Technical depth and breadth: able to build and operate data pipelines and data storage; has worked on big data architecture within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of technologies adjacent to the ones they have worked on and can speak to the alternative tech choices to those made on their projects.
Implementation and automation of internal data extraction from SAP BW / HANA.
Implementation and automation of external data extraction from openly available internet data sources via APIs.
Data cleaning, curation and enrichment using Alteryx, SQL, Python, R, PySpark, SparkR.
Data ingestion and management in Hadoop / Hive.
Preparing consolidated DataMarts for use by Data Scientists and managing SQL databases.
Exposing data via Alteryx and SQL Database for consumption in Tableau.
Data documentation maintenance/updates.
Collaboration and workflow using a version control system (e.g., GitHub).
Learning ability: is self-reflective, has a hunger to improve, has a keen interest in driving their own learning, and applies theoretical knowledge to practice.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Flexible working hours: this role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience
Deep knowledge of manipulating, processing, and extracting value from datasets; support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization.
Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred.
5+ years of experience in data engineering, business intelligence, data science, or a related field.
Proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing.
Strong project management skills and the ability to plan and prioritize work in a fast-paced environment.
Experience with MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx and Tableau.
Ability to think creatively; highly driven and self-motivated.
Knowledge of SAP BW for HANA (extractors, transformations, modeling aDSOs, queries, OpenHubs).
No relocation support available.

Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region comprises six business units, has more than 21,000 employees and operates in more than 27 countries, including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, the Philippines, Saudi Arabia, South Africa, Thailand, the United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular | Analytics & Modelling | Analytics & Data Science
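A hedged sketch of the PySpark cleaning-and-enrichment step listed above (the file path and column names are invented for illustration):

```python
# Sketch: PySpark data cleaning/enrichment of a raw point-of-sale extract.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pos_cleaning").getOrCreate()

raw = spark.read.option("header", True).csv("/data/inbound/pos_daily.csv")

cleaned = (
    raw.dropDuplicates(["store_id", "sku", "txn_date"])          # de-dupe
       .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
       .withColumn("qty", F.col("qty").cast("int"))
       .na.fill({"qty": 0})                                      # default missing qty
       .filter(F.col("store_id").isNotNull())
)

# Simple enrichment: flag high-volume transactions for downstream marts
cleaned = cleaned.withColumn("high_volume", F.col("qty") > 100)

cleaned.write.mode("overwrite").parquet("/data/curated/pos_daily/")
```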

Posted 3 days ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

At TVL Media, we specialize in driving innovative digital strategies and creative storytelling that captivates, converts, and builds lasting brand equity. We’re on a mission to elevate brands through powerful content and data-backed digital marketing strategies. If you're a passionate writer who can craft compelling content across channels and formats, this is your chance to grow with a fast-paced, creative team.

We are looking for a Content Writer who thrives in the digital world and knows how to turn ideas into impactful content across platforms. The ideal candidate will have a solid grasp of content strategy, digital storytelling, and platform-optimized writing, with experience producing blog posts, LinkedIn content, carousels, ebooks, and more.

Key Responsibilities
Content Creation & Strategy: Write engaging blog posts tailored for SEO and reader value. Craft LinkedIn posts and carousels that spark engagement and build authority. Research, outline, and develop long-form content such as ebooks and whitepapers. Collaborate with designers to shape content for visual platforms (social media, carousels, infographics).
Digital Marketing Alignment: Work closely with the marketing team to support campaigns with aligned messaging. Develop persuasive copy for landing pages, email marketing, and paid ads. Stay updated on digital marketing trends, tools, and tone.
Content Optimization: Use SEO best practices, tools (like Surfer SEO, Clearscope, or SEMrush), and analytics to optimize performance. Conduct keyword research and implement strategies to boost search visibility. Ensure consistency in brand voice and adherence to content guidelines.
Cross-functional Collaboration: Coordinate with social media managers, designers, and campaign leads. Attend brainstorming sessions and contribute ideas for new formats and series.

Qualifications
Minimum 1 year of proven experience in content writing, preferably in a digital marketing or agency setup. Excellent command of English (written and verbal). Portfolio demonstrating versatility across blogs, ebooks, LinkedIn posts, carousels, and more. Working knowledge of content management systems (e.g., WordPress), SEO tools, and basic analytics. Ability to adapt tone and style to target audiences and platforms.

About Company: TVL Media is a values-driven digital marketing agency dedicated to empowering our customers. Over the years, we have worked with Fortune 100s and brand-new startups. We help ambitious businesses like yours generate more profits by building awareness, driving web traffic, connecting with customers, and growing overall sales.

Posted 3 days ago

Apply

5.0 - 10.0 years

18 - 24 Lacs

Bangalore Rural

Work from Office

Source: Naukri

Responsibilities: Design, develop, and maintain big data solutions using Spark, Scala, and Apache tools. Optimize performance through data modeling and query optimization techniques.
Benefits: Annual bonus, Provident Fund.
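As a loose illustration of the query-optimization work this posting mentions (a sketch; the table paths and join key are invented), a common Spark technique is broadcasting a small dimension table to avoid shuffling the large side:

```python
# Sketch: broadcast-join optimization in Spark. A small dimension table is
# shipped to every executor so the large fact table is never shuffled.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join_optimization").getOrCreate()

facts = spark.read.parquet("/warehouse/sales_facts/")   # large table (hypothetical)
dims = spark.read.parquet("/warehouse/store_dim/")      # small table (hypothetical)

# Without the hint Spark may pick a shuffle join; broadcast() forces the
# cheaper map-side join when the dimension table fits in executor memory.
joined = facts.join(broadcast(dims), on="store_id", how="left")

joined.groupBy("region").sum("revenue").show()
```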

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Big Data, Oracle, PySpark.
Experience in SQL and an understanding of ETL best practices.
Strong hands-on experience in ETL/Big Data development.
Extensive hands-on experience in Scala.
Experience with Spark/YARN and troubleshooting Spark, Linux, Python.
Setting up a Hadoop cluster; backup, recovery, and maintenance.
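For illustration, a minimal sketch of the Oracle-to-Spark ingestion implied by this stack (connection details are placeholders, and the Oracle JDBC driver jar is assumed to be on the Spark classpath):

```python
# Sketch: reading an Oracle table into Spark over JDBC.
from pyspark.sql import SparkSession

# assumes ojdbc8.jar was provided via --jars or spark.jars
spark = SparkSession.builder.appName("oracle_ingest").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1")  # hypothetical
    .option("dbtable", "SALES.ORDERS")                           # hypothetical
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "oracle.jdbc.OracleDriver")
    .option("fetchsize", "10000")   # larger fetches cut round trips
    .load()
)

df.write.mode("overwrite").parquet("/data/landing/orders/")
```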

Posted 3 days ago

Apply

0.0 - 31.0 years

0 - 0 Lacs

Kalwa, Thane

Remote

Source: Apna

Job Title: Kids Play Zone Attendant
Location: Parsik Nagar, 90ft Road, Kalwa

About Us: Ladobach Ghar Kids Play Zone & Party is a vibrant and engaging indoor play area dedicated to providing a safe, fun, and stimulating environment for children of all ages. We offer a variety of play structures, activities, and events designed to spark imagination and encourage active play. We are looking for enthusiastic individuals to join our team and help us create memorable experiences for our young visitors and their families.

Job Summary: The Kids Play Zone Attendant is a key member of our team, responsible for ensuring the safety and well-being of children within the play area, providing excellent customer service to parents and guardians, and assisting with day-to-day operations using basic computer skills. This role requires a nurturing personality, strong communication abilities, and a proactive approach to maintaining a clean and organized environment.

Key Responsibilities:
Child Supervision & Safety (Primary Focus): Actively monitor children within the play zone to ensure their safety and adherence to play area rules. Intervene promptly and appropriately in any potentially unsafe situations or conflicts between children. Provide basic first aid for minor injuries (training will be provided). Maintain a watchful eye on entrance and exit points to prevent unauthorized access or children leaving unattended. Engage with children in a positive and encouraging manner, facilitating play and interaction. Assist children with using play equipment safely and appropriately.
Customer Service & Parent Interaction: Warmly welcome and greet all customers (children and adults) upon arrival. Provide clear and concise information about play zone rules, activities, and pricing. Assist customers with inquiries, concerns, and special requests in a professional and friendly manner. Handle registration and check-out procedures efficiently. Address parent concerns regarding their child's well-being or behavior with empathy and discretion. Maintain a positive and approachable demeanor at all times.
Operational & Computer Knowledge: Operate point-of-sale (POS) systems for ticket sales, merchandise, and food/beverage purchases. Manage customer registrations and bookings using basic computer software (e.g., spreadsheet programs, booking systems). Maintain accurate records of attendance and transactions. Assist with opening and closing procedures, including light cleaning and tidying of the play area. Monitor and report any equipment malfunctions or maintenance needs. Basic data entry and report generation as required.
Cleanliness & Maintenance: Regularly inspect and ensure the cleanliness of the play area, including play structures, seating areas, and restrooms. Sanitize equipment and toys periodically to maintain a hygienic environment. Assist with restocking supplies (e.g., first aid, cleaning materials).

Qualifications:
Experience: Previous experience working with children (e.g., childcare, babysitting, teaching assistant, summer camp counselor) is highly preferred. Experience in a customer-facing role is a plus.
Skills: Excellent communication and interpersonal skills, with the ability to interact effectively with children, parents, and colleagues. A genuine love for working with children and a patient, nurturing demeanor. Basic computer proficiency, including comfort with using POS systems, email, and basic office software. Strong observational skills and attention to detail, especially regarding child safety. Ability to work independently and as part of a team. Problem-solving skills and the ability to remain calm in a fast-paced environment.
Education: 12th Pass, Graduate
Timing: 11 AM to 8 PM
Availability: Must be available to work flexible hours, including evenings, weekends, and holidays. Weekly holiday; Sat-Sun working.
Physical Requirements: Ability to stand, walk, bend, stoop, and lift up to [e.g., 10-15 kg] occasionally. Ability to actively move around the play area and engage with children at their level.

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

Our technology services client is seeking multiple Data Analytics professionals with SQL, Databricks, and ADF to join their team on a contract basis. These positions offer strong potential for conversion to full-time employment upon completion of the initial contract period. Further details below:

Role: Data Analytics with SQL, Databricks, ADF
Mandatory Skills: SQL, Databricks, ADF
Experience: 5-7 years
Location: Pan India
Notice Period: Immediate to 15 days

Required Qualifications:
5 years of software solution development using agile, DevOps, and a product model, including designing, developing, and implementing large-scale applications or data engineering solutions.
5+ years of data analytics experience using SQL.
5+ years of full-stack development experience, preferably in Azure.
5+ years of cloud development (preferably Microsoft Azure), including Azure EventHub, Azure Data Factory, Azure Functions, ADX, ASA, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
1+ years of FastAPI experience is a plus.
Airline industry experience.
Expertise with the Azure technology stack for data management: data ingestion, capture, processing, curation and creating consumption layers.
Azure Development Track certification (preferred).
Spark certification (preferred).

If you are interested, kindly share your updated resume with Sathwik@s3staff.com

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Join us as a Data Engineer

This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You’ll be simplifying the bank by developing innovative data-driven solutions, using insight to be commercially successful, and keeping our customers’ and the bank’s data safe and secure. Participating actively in the data engineering community, you’ll deliver opportunities to support the bank’s strategic direction while building your network across the bank. We're offering this role at associate level.

What you'll do
As a Data Engineer, you’ll play a key role in driving value for our customers by building data solutions. You’ll be carrying out data engineering tasks to build, maintain, test and optimise a scalable data architecture, as well as carrying out data extractions, transforming data to make it usable to data analysts and scientists, and loading data into data platforms.

You’ll also be:
Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
Practicing DevOps adoption in the delivery of data engineering, proactively performing root cause analysis and resolving issues
Collaborating closely with core technology and architecture teams in the bank to build data knowledge and data solutions
Developing a clear understanding of data platform cost levers to build cost-effective and strategic solutions
Sourcing new data using the most appropriate tooling and integrating it into the overall solution to deliver for our customers

The skills you'll need
To be successful in this role, you’ll need five-plus years' understanding of data usage and dependencies with wider teams and the end customer, as well as experience of extracting value and features from large-scale data. You'll also perform database migrations from soon-to-be decommissioned platforms onto strategic analytical platforms in a controlled and structured manner (see the sketch below).

You’ll also demonstrate:
Experience with Tableau, Power BI, Snowflake, PostgreSQL, MongoDB, Python, Spark, Autosys and Airflow
Experience using programming languages alongside knowledge of data and software engineering fundamentals
Experience in the AWS cloud ecosystem
Strong communication skills with the ability to proactively engage with a wide range of stakeholders
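A loose sketch of one controlled migration step of the kind described above, assuming hypothetical PostgreSQL and Snowflake connection details, a pre-created target table, and the psycopg2 and snowflake-connector-python packages:

```python
# Sketch: pull a table from a legacy Postgres platform in chunks and land it
# in Snowflake. All connection details and the table name are placeholders.
import pandas as pd
import psycopg2
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

pg = psycopg2.connect("postgresql://etl_user:***@legacy-db:5432/core")
sf = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="MIGRATED",
)

for chunk in pd.read_sql("SELECT * FROM customer_accounts", pg, chunksize=50_000):
    # write_pandas appends each chunk into the (pre-created) Snowflake table
    success, _, nrows, _ = write_pandas(sf, chunk, "CUSTOMER_ACCOUNTS")
    assert success, "chunk load failed"
    print(f"loaded {nrows} rows")

pg.close()
sf.close()
```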

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Description
As part of the Last Mile Science & Technology organization, you’ll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will develop complex data engineering solutions using the AWS technology stack (S3, Glue, IAM, Redshift, Athena). You should have deep expertise in and passion for working with large data sets, building complex data processes, performance tuning, bringing data in from disparate data stores and programmatically identifying patterns. You will work with business owners to develop and define key business questions and requirements. You will provide guidance and support for other engineers with industry best practices and direction. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.

Key job responsibilities
Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack, Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
Extract huge volumes of structured and unstructured data from various sources (relational/non-relational/NoSQL databases) and message streams, and construct complex analyses.
Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting.
Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.

Basic Qualifications
3+ years of data engineering experience
Experience with data modeling, warehousing and building ETL pipelines
Experience with one or more scripting languages (e.g., Python, KornShell)
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.

Preferred Qualifications
Experience with big data technologies such as Hadoop, Hive, Spark, EMR
Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3009499
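For illustration, a hedged sketch of the kind of ad-hoc analysis workflow the AWS stack above supports, using boto3 against Athena (the database, table, and S3 paths are invented):

```python
# Sketch: run an ad-hoc Athena query from Python and poll for completion.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="""
        SELECT route_id, COUNT(*) AS stops
        FROM last_mile.delivery_stops      -- hypothetical table
        WHERE ds = '2024-06-01'
        GROUP BY route_id
        ORDER BY stops DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "last_mile"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/adhoc/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"{len(rows) - 1} result rows")  # first row is the header
```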

Posted 3 days ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Noida, Hyderabad

Work from Office

Source: Naukri

Streaming data - technical skills requirements:
Experience: 5+ years
Solid hands-on and solution architecting experience in Big Data technologies (AWS mandatory)
- Hands-on experience in AWS DynamoDB, EKS, Kafka, Kinesis, Glue, EMR
- Hands-on experience with a programming language like Scala with Spark
- Good command of and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases
- Hands-on working experience with any of the data engineering analytics platforms (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on experience with data ingestion: Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming)
- Hands-on working experience with AWS services like EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, Lake Formation
- Hands-on working experience with AWS Athena
- Data warehouse exposure to Apache NiFi, Apache Airflow, Kylo
- Operationalization of ML models on AWS (e.g. deployment, scheduling, model monitoring, etc.)

Mandatory skills: AWS, Spark, Scala, Hadoop (Big Data)

Hands-on experience in the following:
- Feature engineering and data processing to be used for model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Experience building data pipelines for structured/unstructured, real-time/batch, and synchronous/asynchronous events using MQ, Kafka, and stream processing
- Hands-on working experience analysing source system data and data flows, working with structured and unstructured data
- Must be very strong in writing SQL queries
- Strengthen the data engineering team with Big Data solutions
- Strong technical, analytical, and problem-solving skills
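As a sketch of the stream-processing pattern these requirements describe (broker, topic, and paths are placeholders; the job assumes the spark-sql-kafka connector package is supplied at launch):

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and write
# micro-batches to Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical
    .option("subscribe", "order-events")                # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; decode the value payload to a string
decoded = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (
    decoded.writeStream.format("parquet")
    .option("path", "/data/streams/orders/")
    .option("checkpointLocation", "/data/checkpoints/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```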

Posted 3 days ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Pune

Hybrid

Source: Naukri

Job Summary: We are seeking an experienced Big Data Engineer to join our dynamic team. The ideal candidate will have strong expertise in Spark, Scala, Hadoop, and SQL, with a proven track record of building scalable data pipelines and delivering high-performance data solutions.

Mandatory Skills:
Strong experience in Big Data technologies
Apache Spark (Core, SQL, DataFrames, RDD)
Scala programming (hands-on expertise)
Hadoop ecosystem (HDFS, MapReduce, YARN, Hive, HBase)
SQL (advanced querying, optimization, joins, aggregations)
Data ingestion & processing: ETL development and real-time data streaming (Kafka desirable)
Working knowledge of distributed computing concepts

Good to Have:
Experience with cloud platforms (AWS, Azure, or GCP)
Familiarity with data warehousing concepts
Exposure to Python/Java for scripting purposes
CI/CD practices for data pipeline deployments

Soft Skills:
Excellent problem-solving and analytical skills
Strong communication and collaboration abilities
Ability to work in Agile development environments
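To make the "advanced SQL" requirement concrete, here is a minimal sketch of a window-function query run through Spark SQL (the Parquet path and column names are invented):

```python
# Sketch: rank each customer's orders by recency and keep the latest one.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_windows").getOrCreate()

spark.read.parquet("/warehouse/orders/").createOrReplaceTempView("orders")

latest_per_customer = spark.sql("""
    SELECT customer_id, order_id, order_ts, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_ts DESC
               ) AS rn
        FROM orders
    )
    WHERE rn = 1          -- keep only each customer's most recent order
""")

latest_per_customer.show()
```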

Posted 3 days ago

Apply

12.0 - 20.0 years

35 - 40 Lacs

Navi Mumbai

Work from Office

Source: Naukri

Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
Review code and provide feedback to junior engineers to maintain high-quality and scalable solutions.
Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
Lead by example in object-oriented development, particularly using Scala and Java.
Translate complex requirements into clear, actionable technical tasks for the team.
Contribute to the development of ETL processes for integrating data from various sources.
Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
8+ years of professional experience in Big Data development and engineering.
Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
Solid object-oriented development experience with Scala and Java.
Strong SQL skills with experience working with large data sets.
Practical experience designing, installing, configuring, and supporting Big Data clusters.
Deep understanding of ETL processes and data integration strategies.
Proven experience mentoring or supporting junior engineers in a team setting.
Strong problem-solving, troubleshooting, and analytical skills.
Excellent communication and interpersonal skills.

Preferred Qualifications:
Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
Opportunity to work on challenging, high-impact Big Data projects.
Leadership role in shaping and mentoring the next generation of engineers.
Supportive and collaborative team culture.
Flexible working environment.
Competitive compensation and professional growth opportunities.
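For a flavor of the hands-on Hive/Spark work this role supports and reviews (a sketch only; the database and table are invented, and a configured Hive metastore is assumed):

```python
# Sketch: a Hive-enabled Spark session querying a warehouse table.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive_read")
    .enableHiveSupport()   # resolve tables via the Hive metastore
    .getOrCreate()
)

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM analytics.clickstream      -- hypothetical Hive table
    GROUP BY event_date
    ORDER BY event_date
""")
daily.show(10)
```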

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Description
Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack, Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
Extract huge volumes of structured and unstructured data from various sources (relational/non-relational/NoSQL databases) and message streams, and construct complex analyses.
Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting.
Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance.

Basic Qualifications
3+ years of data engineering experience
Experience with data modeling, warehousing and building ETL pipelines
4+ years of SQL experience
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets

Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3009501
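As a small illustration of how the ETL work described above might be driven programmatically (a sketch; the Glue job name and argument are placeholders for a job defined elsewhere):

```python
# Sketch: kick off and monitor an AWS Glue ETL job from Python.
import time
import boto3

glue = boto3.client("glue", region_name="us-east-1")

run_id = glue.start_job_run(
    JobName="curate-orders-daily",         # hypothetical Glue job
    Arguments={"--ds": "2024-06-01"},      # passed to the job script
)["JobRunId"]

while True:
    state = glue.get_job_run(
        JobName="curate-orders-daily", RunId=run_id
    )["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(15)

print("Glue run finished with state:", state)
```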

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Summary
We are seeking an experienced and innovative Data Scientist to join our team. The ideal candidate will leverage data-driven insights to solve complex problems, optimize business processes, and contribute to strategic decision-making. This role requires expertise in statistical analysis, machine learning, and data visualization to extract valuable insights from large datasets.

Key Responsibilities
Collect, clean, and preprocess structured and unstructured data from various sources.
Apply statistical methods and machine learning algorithms to analyze data and identify patterns.
Develop predictive and prescriptive models to support business goals.
Collaborate with stakeholders to define data-driven solutions for business challenges.
Visualize data insights using tools like Power BI, Tableau, or Matplotlib.
Perform A/B testing and evaluate model accuracy using appropriate metrics.
Optimize machine learning models for scalability and performance.
Document processes and communicate findings to non-technical stakeholders.
Stay updated on advancements in data science techniques and tools.

Required Skills and Qualifications
Proficiency in programming languages like Python, R, or Scala.
Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience with SQL and NoSQL databases for data querying and manipulation.
Understanding of big data technologies like Hadoop, Spark, or Kafka.
Ability to perform statistical analysis and interpret results.
Experience with data visualization libraries like Seaborn, Plotly, or D3.js.
Excellent problem-solving and analytical skills.
Strong communication skills to present findings to technical and non-technical audiences.

Preferred Qualifications
Master's or PhD in Data Science, Statistics, Computer Science, or a related field.
Experience with cloud platforms (e.g., AWS, Azure, GCP) for data processing and model deployment.
Knowledge of NLP (Natural Language Processing) and computer vision.
Familiarity with DevOps practices and containerization tools like Docker and Kubernetes.
Exposure to time-series analysis and forecasting techniques.
Certification in data science or machine learning tools is a plus.

About Us
Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency – to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group.

Equal Opportunity Employer
Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Information Security Responsibilities
Understand and adhere to information security policies, guidelines and procedures, and practice them to protect organizational data and information systems.
Take part in information security training and apply it when handling information.
Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO).
Understand and adhere to the additional information security responsibilities that are part of the assigned job role.
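As a rough illustration of the modeling-and-evaluation loop listed in the responsibilities (a sketch on synthetic data; in practice the features would come from the preprocessed business datasets):

```python
# Sketch: train a classifier and evaluate it with standard metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, proba), 3))
print(classification_report(y_test, model.predict(X_test)))
```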

Posted 3 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary
Senior Analyst, Big Data Analytics & Engineering

Overview
The Services Portfolio Management team is looking for a Senior Analyst, Big Data Analytics & Engineering to build a new product that will serve actionable insights to all Programs under the Mastercard Services Portfolio. The ideal candidate should have a proven ability to analyze large data sets and effectively communicate their findings. They must have prior experience in product development, be highly motivated, innovative, intellectually curious, analytical, and possess an entrepreneurial mindset.

Role
Build a solution stack for a dashboard, including a front end in Power BI/Tableau, data integrations, database models, ETL jobs, etc.
Identify opportunities to introduce automation and AI tools into workflows.
Translate product requirements into tangible technical solution specifications and high-quality, on-time deliverables.
Partner with other automation specialists across Services I&E to learn and build best practices for building and running the Portfolio Cockpit tool.
Identify gaps and conceptualize new product/platform capabilities as required.
Proactively identify automation opportunities.

All About You
Experience with data analysis, with a background in building KPIs and reporting.
Power BI experience preferred, or other reporting tools like Tableau or DOMO.
Experience with PowerApps or other no/low-code app development tools is a plus.
Experience in systems analysis and application design and development.
Ability to deliver technology products/services in a high-growth environment where priorities change rapidly.
Proactive self-starter seeking initiatives for advancement.
Understanding of data architecture and some experience in building logical/conceptual data models or creating data mapping documentation.
Experience with data validation, quality control, and cleansing processes for new and existing data sources.
Strong problem-solving, quantitative, and analytical skills.
Advanced SQL skills; ability to write optimized queries for large data sets.
Exposure to Python, Scala, Spark, Cloud, and other related technologies is advantageous.
In-depth technical knowledge and the ability to learn new technologies.
Attention to detail and quality.
Team player with effective communication skills.
Must be able to interact with management and internal stakeholders, and collect requirements.
Must be able to perform in a team, use judgment, and operate under ambiguity.
Experience in leveraging generative AI tools to enhance day-to-day tasks is beneficial.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and it is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-251036

Posted 3 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Yulu
Yulu is India’s largest shared electric mobility-as-a-service company. Yulu’s mission is to reduce traffic congestion and air pollution by running smart, shared, and small-sized electric vehicles. Yulu is led by a mission-driven & seasoned founding team and has won several prestigious awards for its impact and innovation. Yulu is currently enabling daily commuters for short-distance movements and helping gig workers deliver goods for the last mile with its eco-friendly rides at pocket-friendly prices, reducing the carbon footprint. Yulu is excited to welcome people with high integrity, commitment, the ability to collaborate and take ownership, high curiosity, and an appetite for taking intelligent risks. If our mission brings a spark into your eyes and you’d like to join a passionate team that’s committed to transforming how people commute, work, and explore their cities - come, join the #Unstoppable Yulu tribe!

Stay updated on the latest news from Yulu at https://www.yulu.bike/newsroom and on our website, https://www.yulu.bike/.

Who you are
Your experience speaks volumes: You have 2+ years of hands-on experience in product design, specifically for mobile and web platforms, with a strong portfolio of shipped products.
You have a user-first approach: You believe in human-centred design, conducting research, usability testing, and iterating based on real user feedback to refine your work.
You have strategic & data-driven thinking: You don’t just design; you solve problems by defining the right challenges, leveraging data insights, and crafting scalable, impactful solutions.
You have a collaborative mindset: You thrive in cross-functional teams, working closely with engineers, product managers, and researchers to create user-centric, business-aligned designs.
You passionately design with Zen principles: You craft simple, balanced, and intuitive experiences that evoke deep, visceral emotions in our users at every interaction.

What you'll do
You will take full ownership of your work, ensuring every detail is meticulously crafted, from initial sketches to high-fidelity final designs.
You will move fast to generate multiple concepts and prototypes, knowing when to explore further and when to pivot to a new approach based on user testing and feedback.
You will collaborate closely with engineers, product managers, and stakeholders to align design strategies with business goals and technical feasibility.
You will consider existing insights, technical constraints, business needs, and platform demands to create informed, data-driven solutions.
You will play a crucial role in fostering a collaborative, high-performing design culture.

We assure you
Be a part of an innovative company that values professional growth, trustworthy colleagues, a fun environment in the office, and employee well-being
Work on impactful HR strategies that directly shape the workforce and make positive contributions to the business
A culture that fosters growth, integrity, and innovation

Posted 3 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About The Role
As a Sr. Data Engineer on the Sales Automation Engineering team, you will work across the different areas of data engineering and data architecture, including:
Data migration: from Hive/other DBs to Salesforce/other DBs and vice versa
Data modeling: understanding existing sources and data models, identifying the gaps, and building the future-state architecture
Data pipelines: building data pipelines for several data mart / data warehouse and reporting requirements
Data governance: building the framework for data governance and data quality profiling and reporting

What the Candidate Will Do
Demonstrate strong knowledge of, and the ability to operationalize, leading data technologies and best practices.
Collaborate with internal business units and data teams on business requirements, data access, processing/transformation and reporting needs, and leverage existing and new tools to provide solutions.
Build dimensional data models to support business requirements and reporting needs.
Design, build and automate the deployment of data pipelines and applications to support reporting and data requirements.
Research and recommend technologies and processes to support rapid scale and future-state growth initiatives from the data front.
Prioritize business needs, leadership questions, and ad-hoc requests for on-time delivery.
Collaborate on architecture and technical design discussions to identify and evaluate high-impact process initiatives.
Work with the team to implement data governance and access control, and identify and reduce security risks.
Perform and participate in code reviews, peer inspections and technical design/specifications.
Develop performance metrics to establish process success, and work cross-functionally to consistently and accurately measure success over time.
Deliver measurable business process improvements while re-engineering key processes and capabilities, and map them to the future-state vision.
Prepare documentation and specifications on detailed design.
Work in a globally distributed team in an Agile/Scrum approach.

Basic Qualifications
Bachelor's degree in computer science or a similar technical field of study, or equivalent practical experience.
8+ years of professional software development experience, including experience in the data engineering and architecture space.
Interact with product managers and business stakeholders to understand data needs, and help build data infrastructure that scales across the company.
Very strong SQL skills, including advanced-level SQL coding (window functions, CTEs, dynamic variables, hierarchical queries, materialized views, etc.).
Experience with data-driven architecture and systems design; knowledge of Hadoop-related technologies such as HDFS, Apache Spark, Apache Flink, Hive, and Presto.
Good hands-on experience with object-oriented programming languages like Python.
Proven experience with large-scale distributed storage and database systems (SQL or NoSQL, e.g. Hive, MySQL, Cassandra) and data warehousing architecture and data modeling.
Working experience with cloud technologies like GCP, AWS, Azure.
Knowledge of reporting tools like Tableau and/or other BI tools.

Preferred Qualifications
Python libraries (Apache Spark, Scala)
Working experience with cloud technologies like GCP, AWS, Azure

Posted 3 days ago

Apply

10.0 - 20.0 years

35 - 40 Lacs

Navi Mumbai

Work from Office

Position Overview:
We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
Review code and provide feedback to junior engineers to maintain high-quality and scalable solutions.
Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
Lead by example in object-oriented development, particularly using Scala and Java.
Translate complex requirements into clear, actionable technical tasks for the team.
Contribute to the development of ETL processes for integrating data from various sources.
Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
8+ years of professional experience in Big Data development and engineering.
Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
Solid object-oriented development experience with Scala and Java.
Strong SQL skills with experience working with large data sets.
Practical experience designing, installing, configuring, and supporting Big Data clusters.
Deep understanding of ETL processes and data integration strategies.
Proven experience mentoring or supporting junior engineers in a team setting.
Strong problem-solving, troubleshooting, and analytical skills.
Excellent communication and interpersonal skills.

Preferred Qualifications:
Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
Opportunity to work on challenging, high-impact Big Data projects.
Leadership role in shaping and mentoring the next generation of engineers.
Supportive and collaborative team culture.
Flexible working environment.
Competitive compensation and professional growth opportunities.

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title And Summary
Senior Analyst, Big Data Analytics & Engineering

Overview
The Services Portfolio Management team is looking for a Senior Analyst, Big Data Analytics & Engineering to build a new product that will serve actionable insights to all programs under the Mastercard Services portfolio. The ideal candidate should have a proven ability to analyze large data sets and effectively communicate their findings. They must have prior experience in product development, be highly motivated, innovative, intellectually curious, analytical, and possess an entrepreneurial mindset.

Role
Build the solution stack for a dashboard, including the front end in Power BI/Tableau, data integrations, database models, ETL jobs, etc.
Identify opportunities to introduce automation and AI tools into workflows.
Translate product requirements into tangible technical solution specifications and high-quality, on-time deliverables.
Partner with other automation specialists across Services I&E to learn and build best practices for building and running the Portfolio Cockpit tool.
Identify gaps and conceptualize new product/platform capabilities as required.
Proactively identify automation opportunities.

All About You
Experience with data analysis, with a background in building KPIs and reporting.
Power BI experience preferred, or other reporting tools like Tableau or DOMO.
Experience with PowerApps or other no/low-code app development tools is a plus.
Experience in systems analysis and application design and development.
Ability to deliver technology products/services in a high-growth environment where priorities change rapidly.
Proactive self-starter seeking initiatives for advancement.
Understanding of data architecture and some experience building logical/conceptual data models or creating data-mapping documentation.
Experience with data validation, quality control, and cleansing processes for new and existing data sources.
Strong problem-solving, quantitative, and analytical skills.
Advanced SQL skills; ability to write optimized queries for large data sets.
Exposure to Python, Scala, Spark, cloud, and other related technologies is advantageous.
In-depth technical knowledge and the ability to learn new technologies.
Attention to detail and quality.
Team player with effective communication skills; must be able to interact with management and internal stakeholders and collect requirements.
Must be able to perform in a team, use judgment, and operate under ambiguity.
Experience leveraging generative AI tools to enhance day-to-day tasks is beneficial.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, everyone working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-251036

Posted 3 days ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Hyderabad

Work from Office

5+ years as a Data Engineer, with hands-on experience in:
1. Spark
2. Kafka
3. Java
4. AWS

Posted 3 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you’ll do?
Design, develop, and operate high-scale applications across the full engineering stack.
Design, develop, test, deploy, maintain, and improve software.
Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
Participate in a tight-knit, globally distributed engineering team.
Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
Research, create, and develop software applications to extend and improve Equifax solutions.
Manage your own project priorities, deadlines, and deliverables.
Collaborate on scalability issues involving access to data and information.
Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What experience you need?
Bachelor's degree or equivalent experience
5+ years of software engineering experience
5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
5+ years of experience with cloud technology: GCP, AWS, or Azure
5+ years of experience designing and developing cloud-native solutions
5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What could set you apart?
Knowledge of or experience with Apache Beam for stream and batch data processing.
Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Exposure to data visualization tools or platforms.

Posted 3 days ago

Apply

5.0 - 10.0 years

5 - 15 Lacs

Gurugram

Hybrid

IntraEdge is looking for Big Data engineers/developers to work on collecting, storing, processing, and analyzing huge data sets, and to integrate them with the architecture used across the company.

Responsibilities:
Select and integrate any Big Data tools and frameworks required to provide the requested capabilities.
Partner with architects and other senior leads to address data needs.
Partner with data scientists and product teams to build and deploy machine learning models that unlock growth.
Build custom integrations and data pipelines between cloud-based systems using APIs.
Write complex, efficient code to transform raw data sources into easily accessible models, coding in languages such as Python, Scala, or SQL.
Design, develop, and test large-scale, custom-distributed software systems using the latest Java, Scala, and Big Data technologies.
Actively contribute to defining the technology strategy (design, architecture, and interfaces) to effectively respond to our clients' business needs.
Participate in technology watch and in defining standards to ensure that our systems and data warehouses are efficient, resilient, and durable.
Use Informatica or similar products, with an understanding of heterogeneous data replication techniques.
Build data expertise and own data quality for the pipelines you create.

Skills and Qualifications:
Bachelor's/Master's degree in Computer Science, Management of Information Systems, or equivalent.
4 or more years of relevant software engineering experience (Big Data: Hive, Spark, Kafka, Cassandra, Scala, Python, SQL) in a data-focused role.
Experience in GCP, building batch/streaming ETL pipelines with frameworks like Spark, Spark Streaming, and Apache Beam, and working with messaging systems like Pub/Sub and Kafka.
Working experience with Java tools or Apache Camel.
Experience designing and building highly scalable, reliable data pipelines using Big Data tools (Airflow, Python, Redshift/Snowflake).
Software development experience with proficiency in Python, Java, Scala, or another language.
Good knowledge of Big Data querying tools, such as Hive, and experience with Spark/PySpark.
Good knowledge of SQL and Python.
Ability to analyse and obtain insights from complex/large data sets.
Ability to design and develop highly performing SQL Server database objects.

Experience: 5-10 years
Notice period: serving NP/immediate joiners/max 30 days
Location: Gurugram/Bangalore/Pune/Remote
Salary: decent hike on current CTC

Posted 3 days ago

Apply

40.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Jubilant Bhartia Group
Jubilant Bhartia Group is a global conglomerate founded by Mr. Shyam S Bhartia and Mr. Hari S Bhartia, with a strong presence in diverse sectors such as Pharmaceuticals, Contract Research and Development Services, Proprietary Novel Drugs, Life Science Ingredients, Agri Products, Performance Polymers, Food Service (QSR), Food, Auto, Consulting in Aerospace, and Oilfield Services. The group has four flagship companies: Jubilant Pharmova Limited, Jubilant Ingrevia Limited, Jubilant FoodWorks Limited, and Jubilant Industries Limited. Currently, the group has a global workforce of around 43,000 employees.

About Jubilant Ingrevia Limited
Jubilant Ingrevia is a global integrated Life Science products & innovative solutions provider serving pharmaceutical, agrochemical, nutrition, consumer, and industrial customers with customised products & solutions that are innovative, cost-effective, and conforming to premium quality standards. Ingrevia is born out of a union of “Ingre”, denoting ingredients, and “vie”, French for life (i.e., ingredients for life).

Jubilant Ingrevia's history goes back to 1978 with the incorporation of VAM Organics Limited, which later became Jubilant Organosys, then Jubilant Life Sciences, and has now been demerged into an independent entity, Jubilant Ingrevia Limited, which is listed on both of India's stock exchanges. Over the years, the company has developed global capacities and leadership in its chosen business segments. We have more than 40 years of experience in Life Science Chemicals, 30+ years of experience in Pyridine Chemistry and value-added Specialty Chemicals, and 20+ years of experience in Vitamin B3, B4, and other nutraceutical products. We have strategically segmented our business into the three business segments explained below and are rapidly growing revenue in all three.

Speciality Chemicals Segment: We propose to launch a new platform of Diketene & its value-added derivatives, and to forward-integrate our crop protection chemicals to value-added agrochemicals (herbicides, fungicides & insecticides) by adding new facilities. We are an established ‘partner of choice’ in CDMO, with further investment plans in GMP & non-GMP multi-product facilities for Pharma & Crop Protection customers.

Nutrition & Health Solutions Segment: We propose to expand the existing capacity of Vitamin B3 to continue being one of the market leaders, and to introduce new branded animal as well as human nutrition and health premixes.

Chemical Intermediates Segment: We propose to expand our existing acetic anhydride capacity, add value-added anhydrides and aldehydes, and enhance volumes in speciality ethanol.

We have 5 world-class manufacturing facilities: one in UP at Gajraula, two in Gujarat at Bharuch and Baroda, and two in Maharashtra at Nira and Ambernath. We operate 61 plants across these 5 sites, giving us a multi-plant, multi-location advantage. Find out more about us at www.jubilantingrevia.com

The Position
Organization: Jubilant Ingrevia Limited
Designation: Data Scientist
Location: Noida

Job Summary: Plays a crucial role in helping businesses make informed decisions by leveraging data. Will collaborate with stakeholders, design data models, create algorithms, and share meaningful insights to drive business success.

Key Responsibilities
Work with supply chain, manufacturing, sales managers, customer account managers, and the quality function to produce algorithms.
Gather and interpret data from various sources.
Clean and verify the accuracy of data sets to ensure data integrity.
Develop and implement data collection systems and strategies to optimize efficiency and accuracy.
Apply statistical techniques to analyze and interpret complex data sets.
Develop and implement statistical models for predictive analysis.
Build and deploy machine learning models to solve business problems.
Create visual representations of data through charts, graphs, and dashboards to communicate findings effectively.
Develop dashboards and reports for ongoing monitoring and analysis.
Create, modify, and improve complex manufacturing schedules.
Create scenario planning models for manufacturing; develop a manufacturing schedule adherence probability model.
Regularly monitor and evaluate data quality, making recommendations for improvements as necessary and ensuring compliance with data privacy and security regulations.

Person Profile
Qualification: B.E/M.Sc Maths/Statistics
Experience: 2-5 years

Desired Skills & Must Have
2-5 years of relevant experience in the chemical/manufacturing industry.
Hands-on Generative AI; exposure to Agentic AI.
Proficiency in data analysis tools such as Microsoft Excel, SQL, and statistical software (e.g., R or Python).
Proficiency in programming languages such as Python or R.
Expertise in statistical analysis, machine learning algorithms, and data manipulation.
Strong analytical and problem-solving skills with the ability to handle complex data sets.
Excellent attention to detail and a high level of accuracy in data analysis.
Solid knowledge of data visualization techniques and experience with visualization tools like Tableau or Power BI.
Strong communication skills to present findings and insights to non-technical stakeholders effectively.
Knowledge of statistical methodologies and techniques, including regression analysis, clustering, and hypothesis testing.
Familiarity with data modeling and database management concepts.
Experience manipulating and cleansing large data sets.
Ability to work collaboratively in a team environment and adapt to changing priorities.
Experience with big data technologies (e.g., Hadoop, Spark).
Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud).
Familiarity with data engineering and database technologies.

Jubilant is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, colour, gender identity or expression, genetic information, marital status, medical condition, national origin, political affiliation, race, ethnicity, religion, or any other characteristic protected by applicable local laws, regulations, and ordinances.

Posted 3 days ago

Apply

Exploring Spark Jobs in India

The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
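To give a concrete sense of what day-to-day Spark work looks like, here is a minimal PySpark sketch of the classic word-count job. It assumes a local Spark installation; the input path "input.txt" is a placeholder, not a real dataset.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a local Spark session; on a real cluster the master URL
# would come from the deployment environment rather than "local[*]".
spark = SparkSession.builder.master("local[*]").appName("WordCount").getOrCreate()

# Read a plain-text file into a DataFrame with one row per line in a
# single "value" column. The path "input.txt" is a placeholder.
lines = spark.read.text("input.txt")

# Split each line on whitespace, explode into one word per row,
# then count the occurrences of each word.
words = lines.select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
counts = words.filter(F.col("word") != "").groupBy("word").count()

# show() is an action: only at this point does Spark actually execute the job.
counts.orderBy(F.desc("count")).show(10)

spark.stop()
```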

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities have a high concentration of tech companies and startups actively hiring for Spark roles.

Average Salary Range

The average salary range for Spark professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Salaries may vary based on the company, location, and specific job requirements.

Career Path

In the field of Spark, a typical career progression may look like:

  1. Junior Developer
  2. Senior Developer
  3. Tech Lead
  4. Architect

Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.

Related Skills

Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:

  • Hadoop
  • Java or Scala programming
  • Data processing and analytics
  • SQL databases

Having a combination of these skills can make a candidate more competitive in the job market.
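Because SQL skills transfer almost directly into Spark, the sketch below shows how a familiar GROUP BY query runs unchanged against a DataFrame registered as a temporary view. The spark_jobs table, its columns, and the sample rows are invented purely for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SqlSkillsDemo").getOrCreate()

# A small in-memory DataFrame standing in for a real table; the schema
# (city, salary_lakhs) is invented purely for this example.
jobs = spark.createDataFrame(
    [("Bangalore", 12), ("Pune", 9), ("Hyderabad", 11), ("Bangalore", 18)],
    ["city", "salary_lakhs"],
)

# Register the DataFrame as a temporary view so plain SQL works against it.
jobs.createOrReplaceTempView("spark_jobs")

# Standard SQL (aggregates, GROUP BY, ORDER BY) runs unchanged in Spark SQL.
spark.sql("""
    SELECT city, AVG(salary_lakhs) AS avg_salary
    FROM spark_jobs
    GROUP BY city
    ORDER BY avg_salary DESC
""").show()
```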

Interview Questions

  • What is Apache Spark and how is it different from Hadoop? (basic)
  • Explain the difference between RDD, DataFrame, and Dataset in Spark. (medium)
  • How does Spark handle fault tolerance? (medium)
  • What is lazy evaluation in Spark? (basic)
  • Explain the concept of transformations and actions in Spark. (basic) (see the code sketch after this list)
  • What are the different deployment modes in Spark? (medium)
  • How can you optimize the performance of a Spark job? (advanced)
  • What is the role of a Spark executor? (medium)
  • How does Spark handle memory management? (medium)
  • Explain the Spark shuffle operation. (medium)
  • What are the different types of joins in Spark? (medium)
  • How can you debug a Spark application? (medium)
  • Explain the concept of checkpointing in Spark. (medium)
  • What is lineage in Spark? (basic)
  • How can you monitor and manage a Spark application? (medium)
  • What is the significance of the Spark Driver in a Spark application? (medium)
  • How does Spark SQL differ from traditional SQL? (medium)
  • Explain the concept of broadcast variables in Spark. (medium)
  • What is the purpose of the SparkContext in Spark? (basic)
  • How does Spark handle data partitioning? (medium)
  • Explain the concept of window functions in Spark SQL. (advanced)
  • How can you handle skewed data in Spark? (advanced)
  • What is the use of accumulators in Spark? (advanced)
  • How can you schedule Spark jobs using Apache Oozie? (advanced)
  • Explain the process of Spark job submission and execution. (basic)
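
Several of the basic questions above (lazy evaluation, transformations vs. actions) are easiest to answer with a few lines of code. The sketch below, written against the standard RDD API with arbitrary sample numbers, shows that transformations only record lineage while an action triggers execution.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("LazyEvalDemo").getOrCreate()
sc = spark.sparkContext

# parallelize builds an RDD from local data; the numbers are arbitrary.
rdd = sc.parallelize(range(1, 11))

# map and filter are transformations: they only record lineage,
# so nothing executes yet (lazy evaluation).
squares = rdd.map(lambda x: x * x)
evens = squares.filter(lambda x: x % 2 == 0)

# collect is an action: it forces the recorded lineage to run and
# returns the results to the driver.
print(evens.collect())  # [4, 16, 36, 64, 100]
```

In interview terms: map, filter, and flatMap return new RDDs immediately because they are lazy, while actions such as collect, count, and saveAsTextFile force evaluation of the lineage built up so far.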

Closing Remarks

As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!
