
7796 Spark Jobs - Page 26

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

15.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction
Joining the IBM Technology Expert Labs team means you'll have a career delivering world-class services for our clients. As the ultimate expert in IBM products, you'll bring together all the necessary technology and services to help customers solve their most challenging problems. Working in IBM Technology Expert Labs means accelerating time to value confidently and ensuring speed and insight while our clients focus on what they do best—running and growing their business. Excellent onboarding and an industry-leading learning culture will set you up for positive impact while advancing your career. Our culture is collaborative and experiential. As part of a team, you will be surrounded by bright minds and keen co-creators—always willing to help and be helped—as you apply passion to work that will positively impact the world around us.

Your Role and Responsibilities
The candidate is responsible for:
- DB2 installation and configuration in the following environments: on-premises, multi-cloud, and Red Hat OpenShift clusters; HADR; non-DPF and DPF.
- Migration of other databases to Db2 (e.g., Teradata, Snowflake, SAP, or Cloudera to Db2).
- Creating high-level and detail-level designs and maintaining product roadmaps that cover both modernization and leveraging cloud solutions.
- Designing scalable, performant, and cost-effective data architectures within the Lakehouse to support diverse workloads, including reporting, analytics, data science, and AI/ML.
- Performing health checks of the databases at the database and system level, making recommendations, and delivering tuning (a connectivity-check sketch follows this listing).
- Deploying DB2 databases as containers within Red Hat OpenShift clusters.
- Configuring containerized database instances, persistent storage, and network settings to optimize performance and reliability.
- Leading the architectural design and implementation of solutions on IBM watsonx.data, ensuring alignment with the overall enterprise data strategy and business objectives.
- Defining and optimizing the watsonx.data ecosystem, including integration with other IBM watsonx components (watsonx.ai, watsonx.governance) and existing data infrastructure (DB2, Netezza, cloud data sources).
- Establishing best practices for data modeling, schema evolution, and data organization within the watsonx.data lakehouse.
- Acting as a subject matter expert on Lakehouse architecture, providing technical leadership and guidance to data engineering, analytics, and development teams.
- Mentoring junior architects and engineers, fostering their growth and knowledge in modern data platforms.
- Participating in the development of architecture governance processes and promoting best practices across the organization.
- Communicating complex technical concepts to both technical and non-technical stakeholders.

Required Technical and Professional Expertise
- 15+ years of experience in data architecture, data engineering, or a similar role, with significant hands-on experience in cloud data platforms.
- Strong proficiency in DB2, SQL, and Python.
- Strong understanding of: database design and modelling (dimensional, normalized, and NoSQL schemas); normalization and indexing; data warehousing and ETL processes; cloud platforms (AWS, Azure, GCP); and big data technologies (e.g., Hadoop, Spark).
- Database migration project experience from one database to another (target database Db2).
- Experience deploying DB2 databases as containers within Red Hat OpenShift clusters and configuring containerized database instances, persistent storage, and network settings to optimize performance and reliability.
- Excellent communication, collaboration, problem-solving, and leadership skills.

Preferred Technical and Professional Experience
- Experience with machine learning environments and LLMs.
- Certification in IBM watsonx.data or related IBM data and AI technologies.
- Hands-on experience with a Lakehouse platform (e.g., Databricks, Snowflake).
- Exposure to implementing, or an understanding of, DB replication processes.
- Experience integrating watsonx.data with GenAI or LLM initiatives (e.g., RAG architectures).
- Experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with data modeling tools (e.g., ER/Studio, ERwin).
- Knowledge of data governance and compliance standards (e.g., GDPR, HIPAA).
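For candidates gauging the hands-on side of the health-check duty above, here is a minimal sketch using the ibm_db Python driver; the connection values are hypothetical placeholders and the query is only a trivial liveness probe, not IBM's own tooling.

import ibm_db

# Hypothetical connection values; in practice these come from a secrets store.
dsn = (
    "DATABASE=SAMPLE;"
    "HOSTNAME=db2.example.com;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=db2inst1;"
    "PWD=change-me;"
)

conn = ibm_db.connect(dsn, "", "")

# Trivial liveness probe: if this round-trips, the database accepts work.
stmt = ibm_db.exec_immediate(conn, "SELECT 1 AS OK FROM SYSIBM.SYSDUMMY1")
row = ibm_db.fetch_assoc(stmt)
print("database reachable:", row["OK"] == 1)

ibm_db.close(conn)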

Posted 2 days ago

Apply

1.0 - 3.0 years

8 - 16 Lacs

Noida

Work from Office

Source: Naukri

- Proficient in Java, Spark, and Kafka for real-time processing
- Skilled in HBase for NoSQL on on-prem clusters
- Strong in data modeling for scalable NoSQL systems
- Built ETL pipelines using Spark for transformation (see the sketch after this listing)
- Knowledge of Hadoop cluster management

Required Candidate Profile
- Bachelor's in CS or a related field
- Familiar with version control systems, particularly Git
- Knowledge of AWS, Azure, or GCP
- Understanding of distributed databases, especially HBase
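The ETL bullet above maps to a routine Spark pattern. A minimal PySpark sketch, with hypothetical paths and column names:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw JSON events (hypothetical path and layout).
raw = spark.read.json("hdfs:///data/raw/events/")

# Transform: drop rows without a key and derive a date partition column.
clean = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/curated/events/"
)

spark.stop()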

Posted 2 days ago

Apply

2.0 years

0 Lacs

India

On-site

Source: LinkedIn

EGNYTE YOUR CAREER. SPARK YOUR PASSION.

Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you're not just landing a new career, you become part of a team of Egnyters that are doers, thinkers, and collaborators who embrace and live by our values: Invested Relationships, Fiscal Prudence, Candid Conversations.

ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.

Egnyte is looking for an experienced Software Engineer to join our team and work on some exciting projects. You should possess excellent communication skills in addition to the desired technical experience.

WHAT YOU'LL DO
- Design and develop highly scalable, elastic cloud architecture that seamlessly integrates with on-premises systems
- Challenge and redefine existing architectural fundamentals to provide the next level of performance and scalability; foresee post-deployment design challenges and performance and scale bottlenecks
- Work with multicultural, geographically distributed teams and closely coordinate with cross-functional teams in multiple time zones
- Deliver enterprise-grade products to customers and continuously work with the engineering team to refine products in the field
- Collaborate with DevOps and SRE teams and work closely with the CTO on roadmap items
- Perform extensive penetration testing to ensure security across a hybrid deployment between public and private cloud
- Monitor and manage 3,000+ nodes using modern DevOps tools and APM solutions
- Carry out proactive performance and exception analysis

YOUR QUALIFICATIONS
- 2-5 years of relevant industry work experience
- BS or MS degree in Computer Science or a related field
- Demonstrated success in designing and developing complex systems
- Expertise with multi-tenant, highly complex cloud solutions
- Experience owning all aspects of software engineering, from design to implementation, QA, and maintenance
- Experience with the following technologies: Java, SQL, Linux, Python, HBase/BigTable
- Data-driven decision process; relies on unit testing instead of manual QA
- Knowledge of DevOps techniques

BONUS SKILLS
- Experience with hybrid and/or on-premises solutions
- Experience working with AWS or GCP
- Experience with the following technologies: Nginx, HAProxy, BigQuery, New Relic, Graphite, and/or Puppet
- Security/governance expertise

COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description

Job Title: AI Engineer
Location: Kochi / Trivandrum
Experience: 3-7 Years

About The Role
We are seeking a talented and experienced AI Engineer to join our growing team and play a pivotal role in the development and deployment of innovative AI solutions. This individual will be a key contributor to our AI transformation, working closely with AI Architects, Data Scientists, and delivery teams to bring cutting-edge AI concepts to life.

Key Responsibilities
- Model Development & Implementation: Design, develop, and implement machine learning models and AI algorithms, from initial prototyping to production deployment (a minimal training sketch follows this listing).
- Data Engineering: Work with large and complex datasets, performing data cleaning, feature engineering, and data pipeline development to prepare data for AI model training.
- Solution Integration: Integrate AI models and solutions into existing enterprise systems and applications, ensuring seamless functionality and performance.
- Model Optimization & Performance: Optimize AI models for performance, scalability, and efficiency, and monitor their effectiveness in production environments.
- Collaboration & Communication: Collaborate effectively with cross-functional teams, including product managers, data scientists, and software engineers, to understand requirements and deliver impactful AI solutions.
- Code Quality & Best Practices: Write clean, maintainable, and well-documented code, adhering to best practices for software development and MLOps.
- Research & Evaluation: Stay updated with the latest advancements in AI/ML research and technologies, evaluating their potential application to business challenges.
- Troubleshooting & Support: Provide technical support and troubleshooting for deployed AI systems, identifying and resolving issues promptly.

Key Requirements
- 3-7 years of experience in developing and deploying AI/ML solutions.
- Strong programming skills in Python (or similar languages) with extensive experience in AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Solid understanding of machine learning algorithms, deep learning concepts, and statistical modelling.
- Experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services.
- Experience with version control systems (e.g., Git) and collaborative development workflows.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.
- Bachelor's or master's degree in Computer Science, Engineering, Data Science, or a related field.

Good To Have
- Experience with MLOps practices and tools (e.g., MLflow, Kubeflow).
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Experience with big data technologies (e.g., Spark, Hadoop).
- Prior experience in an IT services or product development environment.
- Knowledge of specific AI domains such as NLP, computer vision, or time series analysis.

Key Skills: Machine Learning, Deep Learning, Python, TensorFlow, PyTorch, Data Preprocessing, Model Deployment, MLOps, Cloud AI Services, Software Development, Problem-solving.

Skills: Machine Learning, Data Science, Artificial Intelligence
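To illustrate the prototyping end of the responsibilities above, a minimal scikit-learn train-and-evaluate sketch on a bundled toy dataset (the model choice is illustrative only):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data stands in for a real business dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Hold-out evaluation before any production deployment.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))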

Posted 2 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

Job Description
- Design, develop, and maintain scalable data pipelines and systems using DBT and Big Data technologies.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Implement data models and transformations using DBT.
- Develop and maintain ETL processes to ingest and process large volumes of data from various sources.
- Optimize and troubleshoot data workflows to ensure high performance and reliability.
- Ensure data quality and integrity through rigorous testing and validation.
- Monitor and manage data infrastructure, ensuring security and compliance with best practices.
- Provide technical support and guidance to team members on data engineering best practices.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in DBT for data modeling and transformations.
- Hands-on experience with Big Data technologies (e.g., Hadoop, Spark, Kafka).
- Proficient in Python for data processing and automation.
- Experience with SQL and database management.
- Familiarity with data warehousing concepts and best practices.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Preferred Qualifications
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of data governance and security practices.
- Certification in relevant technologies (e.g., DBT, Big Data platforms).

Posted 2 days ago

Apply

7.0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

7+ years of experience in Big Data with strong expertise in Spark and Scala.

Mandatory Skills
- Big Data, primarily Spark and Scala
- Strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys
- Not limited to Spark batch: Spark Streaming experience is required (see the streaming sketch after this listing)
- NoSQL DB experience: HBase, MongoDB, or Couchbase
- Strong communication skills

Good to Have
- Agile methodology and banking expertise
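Since the posting insists on Spark Streaming rather than batch alone, a minimal Structured Streaming sketch in PySpark; it assumes the spark-sql-kafka package is on the classpath, and the broker address and topic are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Read a Kafka topic as an unbounded stream (broker/topic are hypothetical).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "transactions")
         .load()
         .select(F.col("value").cast("string").alias("payload"))
)

# Console sink for demonstration; production jobs would write to HBase, Hive, etc.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()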

Posted 2 days ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Source: Naukri

- Design, develop, and maintain scalable data processing applications using Spark and the PySpark API.
- 5+ years of experience in at least one of the following: Java, Spark, Scala, Python; API development expertise.
- Write efficient, reusable, and well-documented code.
- Design and implement data pipelines using tools like Spark and PySpark.
- Strong analytical and problem-solving abilities to address technical challenges.
- Perform code reviews and provide constructive feedback to improve code quality.
- Design and implement data processing tasks that integrate with SQL databases (a minimal sketch follows this listing).
- Proficiency in data modeling, data lake, lakehouse, and data warehousing concepts.
- Experience with cloud platforms like AWS.
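For the SQL-integration bullet above, a minimal PySpark-over-JDBC sketch; the JDBC URL, tables, and credentials are hypothetical placeholders:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("jdbc-sketch").getOrCreate()

# Hypothetical connection details; the JDBC driver jar must be on the classpath.
jdbc_url = "jdbc:postgresql://db.example.com:5432/sales"
props = {"user": "etl_user", "password": "change-me",
         "driver": "org.postgresql.Driver"}

# Read a source table, aggregate, and write the result back to the database.
orders = spark.read.jdbc(jdbc_url, "public.orders", properties=props)
daily = (
    orders.groupBy(F.to_date("created_at").alias("order_date"))
          .agg(F.sum("amount").alias("revenue"))
)
daily.write.jdbc(jdbc_url, "public.daily_revenue", mode="overwrite",
                 properties=props)

spark.stop()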

Posted 2 days ago

Apply

89.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Full-time

Company Description
GfK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers' buying behavior, and the dynamics impacting their markets, brands, and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives "Growth from Knowledge".

Job Description
It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ's Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate, and measure their media activities. Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making.

Role Overview
We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions. As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms.

Key Responsibilities

Technical Leadership & Architecture
- Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science.
- Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability.
- Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture.
- Work closely with Product Owners to translate business needs into technical solutions.

Hands-on Development & Technical Excellence
- Lead by example through high-quality coding, code reviews, and proof-of-concept development.
- Solve complex engineering problems and contribute to critical design decisions.
- Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR.
- Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment (an orchestration sketch follows this listing).

Line Management & Team Development
- Provide line management to engineers, ensuring their professional growth and development.
- Conduct performance reviews, set development goals, and mentor team members to enhance their skills.
- Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries.
- Support hiring, onboarding, and career development initiatives within the engineering team.

Collaboration & Cross-Team Coordination
- Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components.
- Provide mentorship and guidance to developers, helping them level up their skills and technical understanding.
- Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code.
- Engage with stakeholders across the business, translating technical concepts into business-relevant insights.

Governance, Security & Data Best Practices
- Champion data governance, lineage, and security across the platform.
- Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines.
- Ensure compliance with industry standards, internal policies, and regulatory requirements.

Qualifications

Requirements & Experience
- Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages.
- Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR.
- Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines.
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions.
- Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation).
- Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing.
- Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices.
- Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders.
- Proven line management experience, including mentoring, career development, and performance management of engineering teams.

Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our Commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
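As a taste of the orchestration tooling the role names (Apache Airflow), a minimal DAG sketch; it assumes Airflow 2.4+, and the DAG id, schedule, and task bodies are hypothetical:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")  # placeholder task body

def transform():
    print("apply transformations")  # placeholder task body

with DAG(
    dag_id="recombine_pipeline_sketch",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # run transform only after extract succeeds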

Posted 2 days ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

We're looking for an Associate Cloud Operations Engineer.

This role is office-based, Pune office.

As Associate Cloud Operations Engineer you will be responsible for supporting and maintaining cloud-based infrastructure, ensuring system reliability, performance, and security. You will collaborate with cross-functional teams to monitor cloud services, automate operational tasks, and troubleshoot issues in a fast-paced environment.

In this role, you will...
- Monitor and Maintain Cloud Infrastructure: Ensure high availability, performance, and security of cloud-based services and applications (a monitoring sketch follows this listing).
- Incident Response & Troubleshooting: Investigate and resolve cloud-related incidents, working closely with senior engineers to address issues efficiently.
- Automation & Scripting: Assist in automating repetitive operational tasks using scripting languages like Python and Bash.
- System & Network Monitoring: Utilize monitoring tools such as Prometheus, Grafana, or ELK APM to track system health and performance.
- Security & Compliance: Follow best practices to ensure cloud environments are secure and compliant with industry standards.
- Deployment & Configuration Management: Support deployments using CI/CD tools like Ansible.
- Collaboration & Documentation: Work with engineering teams to optimize cloud resources and maintain detailed documentation for operational processes.

You've Got What It Takes If You Have...
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).
- 1+ years of experience in cloud operations.
- Basic knowledge of cloud platforms (AWS, Azure, or Google Cloud).
- Familiarity with Linux and Windows server environments.
- Understanding of networking concepts (DNS, VPNs, load balancing, firewalls).
- Experience with scripting (Python or Bash) is a plus.
- Exposure to monitoring/logging tools like Prometheus or the ELK stack.
- Strong communication and collaboration skills.
- Strong troubleshooting and problem-solving skills.

Our Culture
Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow, and advance. To be better tomorrow than they are today.

Who We Are
Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
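To make the monitoring duties above concrete, a small sketch against the standard Prometheus HTTP API; the server URL is a hypothetical placeholder:

import requests

PROM_URL = "http://prometheus.example.com:9090"  # hypothetical endpoint

# 'up == 0' matches scrape targets Prometheus currently considers down.
resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up == 0"})
resp.raise_for_status()

down = resp.json()["data"]["result"]
if down:
    for series in down:
        print("target down:", series["metric"].get("instance", "unknown"))
else:
    print("all targets healthy")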

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted, and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing, and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.

Roles and Responsibilities
The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities
- Lead the design and implementation of Databricks-based data solutions.
- Architect and optimize data pipelines for batch and streaming data.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and deliverables.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in Databricks environments.
- Stay updated on the latest Databricks features and industry trends.

Key Technical Skills & Responsibilities
- Experience in data engineering using Databricks or Apache Spark-based platforms.
- Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
- Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
- Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
- Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations (a Delta sketch follows this listing).
- Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation.
- Design and implementation of Azure Key Vault and scoped credentials.
- Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning.
- Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
- Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
- Ability to define best practices, support multiple projects, and mentor junior engineers is a plus.
- Must have experience working with streaming data sources and Kafka (preferred).

Eligibility Criteria
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Extensive experience with Databricks, Delta Lake, PySpark, and SQL.
- Databricks certification (e.g., Certified Data Engineer Professional).
- Experience with machine learning and AI integration in Databricks.
- Strong understanding of cloud platforms (AWS, Azure, or GCP).
- Proven leadership experience in managing technical teams.
- Excellent problem-solving and communication skills.

Our Offering
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs and work-life balance - integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.

Let's grow together.
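For the Delta Lake bullet above, a minimal bronze-to-silver sketch in PySpark; paths are hypothetical, and a Databricks-style runtime with Delta support is assumed:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Bronze: raw landing-zone data (hypothetical path).
bronze = spark.read.json("/mnt/raw/orders/")

# Silver: deduplicate on the business key and persist as a Delta table.
silver = bronze.dropDuplicates(["order_id"])
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders/")

# Delta tables read back like any other Spark source.
print(spark.read.format("delta").load("/mnt/silver/orders/").count())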

Posted 2 days ago

Apply

4.0 - 8.0 years

10 - 19 Lacs

Chennai

Hybrid

Source: Naukri

Greetings from Getronics!

We have permanent opportunities for GCP Data Engineers in Chennai. This is Abirami from the Getronics Talent Acquisition team. We have multiple opportunities for GCP Data Engineers for our automotive client in Chennai (Sholinganallur). Please find the company profile and job description below. If interested, please share your updated resume, recent professional photograph, and Aadhaar proof at the earliest to abirami.rsk@getronics.com.

Company: Getronics (permanent role)
Client: Automobile industry
Experience Required: 4+ years in IT and a minimum of 3+ years in GCP data engineering
Location: Chennai (Elcot - Sholinganallur)
Work Mode: Hybrid

Position Description:
We are currently seeking a seasoned GCP Cloud Data Engineer with 3 to 5 years of experience in leading/implementing GCP data projects, preferably implementing a complete data-centric model. This position is to design and deploy a data-centric architecture in GCP for a Materials Management platform which would get/give data from multiple applications, modern and legacy, in Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration.

- Design and implement data-centric solutions on Google Cloud Platform (GCP) using various GCP tools like Storage Transfer Service, Cloud Data Fusion, Pub/Sub, Dataflow, Cloud compression, Cloud Scheduler, gsutil, FTP/SFTP, Dataproc, and BigTable (a BigQuery sketch follows this listing).
- Build ETL pipelines to ingest the data from heterogeneous sources into our system.
- Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data.
- Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets.
- Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project and infrastructure requirements.

Skills Required:
- GCP Data Engineer, Hadoop, Spark/PySpark, and Google Cloud Platform services: BigQuery, Dataflow, Pub/Sub, BigTable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine.
- 4+ years of professional experience in data engineering, data product development, and software product launches.
- 3+ years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using: data warehouses like Google BigQuery; workflow orchestration tools like Airflow; relational database management systems like MySQL, PostgreSQL, and SQL Server; and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.

Education Required: Any bachelor's degree. Candidates should be willing to take a GCP assessment (1-hour online video test).

LOOKING FOR IMMEDIATE TO 30 DAYS NOTICE CANDIDATES ONLY.

Regards,
Abirami
Getronics Recruitment Team
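To show the flavour of the GCP stack listed, a minimal BigQuery query via the official Python client; the project and table names are hypothetical, and credentials are assumed to come from the standard environment setup:

from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

sql = """
    SELECT plant_code, COUNT(*) AS part_count
    FROM `my-gcp-project.materials.parts`  -- hypothetical table
    GROUP BY plant_code
    ORDER BY part_count DESC
    LIMIT 10
"""

# Submit the query job and iterate over the finished result set.
for row in client.query(sql).result():
    print(row.plant_code, row.part_count)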

Posted 2 days ago

Apply

0.0 - 1.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Source: Indeed

Position: Data Engineer, External
Experience range: 3-6 years (Z2)

Role Overview:
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will play a crucial role in our data ecosystem by working with cloud technologies to enable data accessibility, quality, and insights across the organization. This role requires expertise in Azure Databricks, Snowflake, and DBT.

Requirements:
- Bachelor's in Computer Science, Data Engineering, or a related field.
- Proficiency in Azure Databricks for data processing and pipeline orchestration.
- Experience with Snowflake as a data warehouse platform and DBT for transformations (a Snowflake connection sketch follows this listing).
- Strong SQL skills and understanding of data modeling principles.
- Ability to troubleshoot and optimize data workflows.

Key Responsibilities:
- Data Pipeline Development: Design, build, and optimize data pipelines to ingest, transform, and load data from multiple sources, using Azure Databricks, Snowflake, and DBT.
- Data Architecture: Develop and manage data models within Snowflake, ensuring efficient data organization and accessibility.
- Data Transformation: Implement transformations in DBT, standardizing data for analysis and reporting.
- Performance Optimization: Monitor and optimize pipeline performance, troubleshooting and resolving issues as needed.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to support data-driven projects and provide access to reliable, well-structured data.

Qualifications:
- Relevant experience in MS Azure, Snowflake, DBT, and Big Data Hadoop ecosystem components.
- Understanding of Hadoop architecture and the underlying framework, including storage management.
- Strong understanding and implementation experience in Hadoop, Spark, and Hive/Databricks.
- Expertise in implementing data lake solutions using Scala as well as Python.
- Expertise with orchestration tools like Azure Data Factory.
- Strong SQL and programming skills.
- Experience with Databricks is desirable.
- Understanding of, or implementation experience with, CI/CD tools such as Jenkins, Azure DevOps, and GitHub is desirable.

Job Types: Full-time, Permanent
Pay: Up to ₹800,000.00 per year
Benefits: Health insurance, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus, Quarterly bonus, Shift allowance
Education: Bachelor's (Preferred)
Experience: Overall: 3 years (Preferred); Snowflake: 1 year (Preferred)
Location: Bangalore, Karnataka (Preferred)
Work Location: In person
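For the Snowflake requirement above, a minimal connection-and-query sketch with snowflake-connector-python; account, credentials, and warehouse names are hypothetical placeholders:

import snowflake.connector

# All identifiers below are hypothetical placeholders.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ETL_USER",
    password="change-me",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

cur = conn.cursor()
try:
    # Smoke test: confirms credentials, warehouse, and connectivity.
    cur.execute("SELECT CURRENT_VERSION()")
    print("Snowflake version:", cur.fetchone()[0])
finally:
    cur.close()
    conn.close()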

Posted 2 days ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Experience: 7+ Years
Location: Noida, Sector 64

Key Responsibilities:

Data Architecture Design
- Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
- Develop a data strategy and roadmap that aligns with the business objectives and ensures the scalability of data systems.
- Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management
- Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
- Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
- Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders
- Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
- Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership
- Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
- Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security
- Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems.
- Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership
- Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
- Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Required Skills & Experience:
- Extensive data architecture expertise: over 7 years of experience in data architecture, data modeling, and database management.
- Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions.
- Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.
- Advanced knowledge of data platforms: expertise in the Azure cloud data platform is a must; other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) are a bonus.
- Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing.
- Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).
- Data governance & compliance: strong understanding of data governance principles, data lineage, and data stewardship; knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.
- Technical leadership: proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise.
- Strong programming skills in languages such as Python, SQL, R, or Scala.
- Certification: Azure Certified Solutions Architect, Data Engineer, or Data Scientist certifications are mandatory.

Pre-Sales Responsibilities:
- Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
- Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
- Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
- Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
- Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

Additional Responsibilities:
- Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
- Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
- Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
- Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
- Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
- Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
- Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.

Qualifications:
- Education: Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
- Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
- Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
- Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
- Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.

Posted 2 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Source: LinkedIn

With Confluent, organisations can harness the full power of continuously flowing data to innovate and win in the modern digital world. We have a purpose that drives us to do better every day – we're creating an entirely new category within data infrastructure - data streaming. This technology will allow every organisation to create experiences and use the power of data in ways that profoundly impact the way we all live. This impact is our purpose and drives us to do better every day. One Confluent. One team. One Data Streaming Platform. Data Connects Us.

About The Role
We are looking for a Senior Consulting Engineer to join our Customer Success and Professional Services team. As a Consulting Engineer, you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, and relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges. Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.

What You Will Do
- Help a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer's architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
- Provide feedback to the Confluent Product and Engineering groups.
- Build tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality.
- Test performance and functionality of new components developed by Engineering.
- Write or edit documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences.
- Hone your skills, build applications, or try out new product features.
- Participate in community and industry events.

What You Will Bring
- Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka (a minimal producer sketch follows this listing).
- Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention.
- Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions).
- Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems.
- Experience with Java Virtual Machine (JVM) tuning and troubleshooting.
- Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.).
- Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision.
- Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions.
- Ability to quickly learn new technologies.
- Ability and willingness to travel up to 20% of the time to meet with customers.

Come As You Are
At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law.

Click HERE to review our Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.
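As a flavour of the day-to-day building blocks, a minimal producer sketch with the confluent-kafka Python client; the broker address and topic are hypothetical:

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker:9092"})  # hypothetical broker

def on_delivery(err, msg):
    # Invoked once per message after the broker acknowledges or rejects it.
    if err is not None:
        print("delivery failed:", err)
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

for i in range(3):
    producer.produce("orders", key=str(i), value=f"event-{i}",
                     callback=on_delivery)

producer.flush()  # block until all outstanding messages are acknowledged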

Posted 2 days ago

Apply

13.0 - 16.0 years

15 - 18 Lacs

Bengaluru

Work from Office

Source: Naukri

Group Strategic Analytics (GSA) is part of the Group Chief Operation Office (COO), which acts as the bridge between the Bank's business and infrastructure functions to help deliver the efficiency, control, and transformation goals of the Bank. You will work within the Global Strategic Analytics Team as part of a global model strategy and deployment of Name List Screening and Transaction Screening. To be successful in that role, you will be familiar with the most recent data science methodologies and have a delivery-centric attitude, strong analytical skills, and a detail-oriented approach to breaking down complex matters into more understandable details.

The purpose of Name List Screening and Transaction Screening is to identify and investigate unusual customer names, transactions, and behavior, to understand if that activity is considered suspicious from a financial crime perspective, and to report that activity to the government. You will be responsible for helping to implement and maintain the models for Name List Screening and Transaction Screening to ensure that all relevant criminal risks, typologies, products, and services are properly monitored.

We are looking for a high-performing Associate in financial crime model development, tuning, and analytics to support the global strategy for screening systems across Name List Screening (NLS) and Transaction Screening (TS). This role offers the opportunity to work on key model initiatives within a cross-regional team and contribute directly to the bank's risk mitigation efforts against financial crime. You will support model tuning and development efforts, support regulatory deliverables, and help collaborate with cross-functional teams including Compliance, Data Engineering, and Technology.

Your key responsibilities
- Support the design and implementation of the model framework for name and transaction screening, including coverage, data, model development, and optimisation.
- Support key data initiatives, including but not limited to data lineage, data quality controls, and data quality issues management.
- Document model logic and liaise with Compliance and Model Risk Management teams to ensure screening systems and scenarios adhere to all model governance standards.
- Participate in research projects on innovative solutions to make detection models more proactive.
- Assist in model testing, calibration, and performance monitoring.
- Ensure detailed metrics and reporting are developed to provide transparency and maintain effectiveness of name and transaction screening models.
- Support all examinations and reviews performed by regulators, monitors, and internal audit.

Your skills and experience
- Advanced degree (Master's or PhD) in a quantitative discipline (Mathematics, Computer Science, Data Science, Physics, or Statistics).
- 1-3 years' experience in data analytics or model development (internships included).
- Proficiency in designing, implementing (Python, Spark, cloud environments), and deploying quantitative models in a large financial institution, preferably in Front Office. A hands-on approach is needed.
- Experience utilizing machine learning and artificial intelligence.
- Experience with data and the ability to clearly articulate data requirements as they relate to NLS and TS, including comprehensiveness, quality, accuracy, and integrity.
- Knowledge of the bank's products and services, including those related to corporate banking, investment banking, private banking, and asset management.

Posted 2 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Title: Senior Manager - Product Quality Engineering Leader
Career Level - E

Introduction to role:
Join our Commercial IT Data Analytics & AI (DAAI) team as a Product Quality Leader, where you will play a pivotal role in ensuring the quality and stability of our data platforms built on AWS services, Databricks, and SnapLogic. Based in Chennai GITC, you will drive the quality engineering strategy, lead a team of quality engineers, and contribute to the overall success of our data platform.

Accountabilities:
As the Product Quality Team Leader for data platforms, your key accountabilities will include leadership and mentorship, quality engineering standards, collaboration, technical expertise, and innovation and process improvement. You will lead the design, development, and maintenance of scalable and secure data infrastructure and tools to support the data analytics and data science teams. You will also develop and implement data and data engineering quality assurance strategies and plans tailored to data product build and operations.

Essential Skills/Experience:
- Bachelor's degree or equivalent in Computer Engineering, Computer Science, or a related field.
- Proven experience in a product quality engineering or similar role, with at least 3 years of experience managing and leading a team.
- Experience working within a quality and compliance environment and applying policies, procedures, and guidelines.
- A broad understanding of cloud architecture (preferably in AWS).
- Strong experience in Databricks, PySpark, and the AWS suite of applications (like S3, Redshift, Lambda, Glue, EMR).
- Proficiency in programming languages such as Python.
- Experienced in Agile development techniques and methodologies.
- Solid understanding of data modelling, ETL processes, and data warehousing concepts.
- Excellent communication and leadership skills, with the ability to collaborate effectively with technical and non-technical stakeholders.
- Experience with big data technologies such as Hadoop or Spark.
- Certification in AWS or Databricks.
- Prior significant experience working in a Pharmaceutical or Healthcare industry IT environment.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we are committed to disrupting an industry and changing lives. Our work has a direct impact on patients, transforming our ability to develop life-changing medicines. We empower the business to perform at its peak and lead a new way of working, combining cutting-edge science with leading digital technology platforms and data. We dare to lead, applying our problem-solving mindset to identify and tackle opportunities across the whole enterprise. Our spirit of experimentation is lived every day through events like hackathons. We enable AstraZeneca to perform at its peak by delivering world-class technology and data solutions.

Are you ready to be part of a team that has the backing to innovate, disrupt an industry, and change lives? Apply now to join us on this exciting journey!

Posted 2 days ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

As a Social Media Executive, you will be at the forefront of our clients' digital presence, engaging with audiences and driving brand awareness across various social platforms. Your role is pivotal in executing innovative social media strategies that align with our clients' objectives and resonate with their target audiences.

Key Responsibilities
- Develop, curate, and schedule engaging content (posts, stories, reels, etc.) for platforms like Instagram, Facebook, LinkedIn, Twitter, and TikTok, ensuring alignment with brand voice and campaign goals.
- Monitor and respond to comments, messages, and mentions promptly to foster positive relationships and enhance community engagement.
- Assist in the planning and execution of social media campaigns, including paid ads, influencer collaborations, and live event coverage, ensuring timely delivery and adherence to brand guidelines.
- Utilize tools like Google Analytics, Facebook Insights, and Hootsuite to track performance metrics, analyze campaign effectiveness, and provide actionable insights for optimization.
- Work closely with creative, design, and strategy teams to ensure cohesive and impactful social media initiatives that support overarching marketing objectives.
- Stay updated with the latest social media trends, platform updates, and industry best practices to keep our strategies innovative and competitive.

Skills
- Bachelor's/Master's degree in Marketing, Communications, Journalism, or a related field.
- 1–4 years in social media management, preferably within an advertising or digital agency setting.
- Familiarity with social media management tools (e.g., Hootsuite, Buffer), analytics platforms (e.g., Google Analytics), and basic graphic design tools (e.g., Canva, Adobe Spark).
- Strong written and verbal communication abilities, with a keen eye for detail and a creative flair.
- Ability to interpret data, generate insights, and adjust strategies to improve performance.
- Comfortable working in a fast-paced environment with multiple clients and deadlines.

Location: Noida

Please share your profile and portfolio at aanchal.mittal@magnongroup.com

Note: The brief above is for reference purposes only and to get a basic understanding of the role.

Magnon Group: Magnon is among the largest advertising, digital, and marketing-performance agency groups in India. A part of the Fortune 200 global media corporation Omnicom Group (NYSE: OMC), Magnon employs over 400 professionals across its offices in Delhi, Mumbai, and Bangalore. With three award-winning agencies, namely magnon designory, magnon eg+, and magnon sancus, the Group offers three-sixty-degree marketing solutions including advertising, digital, social, creative production, media, localization, linguistics, and marketing solutions' outsourcing labs, for top global and Indian clients. Magnon works with some of the biggest brands in the world, across five continents, including several Global 500 companies.

Magnon Group is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, or any other basis as protected by applicable law.

Posted 2 days ago

Apply

5.0 years

7 - 17 Lacs

Hyderabad, Bengaluru

Work from Office

Source: Naukri

Dear Aspirant,

Please find the job details below. If you are suitable for the role, please share your updated resume with smouni@deloitte.com along with the details listed.

Skill: Big Data Developer
Experience: 5 to 10 years
Job Location: Hyderabad
Notice Period: Immediate to 30 days

Details Required
- Name
- Skill
- Current Company Name
- Total Years of Experience
- Relevant Experience
- Current Location
- Preferred Location
- Notice Period
- Current CTC
- Expected CTC

Posted 2 days ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Position Overview Job Title: AI Engineer Location: Pune, India Role Description Indra is the central program driving the introduction and safe scaling of AI at DB. Its focus is to identify AI potential across various banking operations, drive funded use cases into production to create value and confidence, scale adoption across the bank, create selected shared services with embedded safety to enable low-cost scale, develop an AI Workbench for safe AI development at pace, and introduce AI controls while aiming to maintain time to market. What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy: Best-in-class leave policy Gender-neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive hospitalization insurance for you and your dependents Accident and term life insurance Complimentary health screening for those aged 35 years and above Your Key Responsibilities Model Deployment: Collaborate with data scientists to deploy machine learning models into production environments. Implement deployment strategies such as A/B testing or canary releases to ensure safe and controlled rollouts. Infrastructure Management: Design and manage the infrastructure required for hosting ML models, including cloud resources and on-premises servers. Utilize containerization technologies like Docker to package models and dependencies. Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines for automating the testing, integration, and deployment of ML models. Implement version control to track changes in both code and model artifacts. Monitoring and Logging: Establish monitoring solutions to track the performance and health of deployed models. Set up logging mechanisms to capture relevant information for debugging and auditing purposes. Scalability and Resource Optimization: Optimize ML infrastructure for scalability and cost-effectiveness. Implement auto-scaling mechanisms to handle varying workloads efficiently. Security and Compliance: Enforce security best practices to safeguard both the models and the data they process. Ensure compliance with industry regulations and data protection standards. Data Management: Oversee the management of data pipelines and data storage systems required for model training and inference. Implement data versioning and lineage tracking to maintain data integrity. Collaboration with Cross-Functional Teams: Work closely with data scientists, software engineers, and other stakeholders to understand model requirements and system constraints. Collaborate with DevOps teams to align MLOps practices with broader organizational goals. Performance Optimization: Continuously optimize and fine-tune ML models for better performance. Identify and address bottlenecks in the system to enhance overall efficiency. Documentation: Maintain clear and comprehensive documentation of MLOps processes, infrastructure, and model deployment procedures. Document best practices and troubleshooting guides for the team. Your Skills And Experience University degree in a technical or quantitative field (e.g., computer science, mathematics, physics, economics), preferably a Master’s or Doctoral degree. 2-4 years of experience in applying AI, machine learning and/or data science in business and/or academia.
Strong knowledge of at least one programming language (e.g., Python, JavaScript) and a relevant data science or engineering framework (e.g., scikit-learn, TensorFlow, Spark). Ideally, practical experience in finance and banking. Comfortable working with and managing uncertainty and ambiguity. Excellent oral and written communication skills in English. How We’ll Support You Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 2 days ago

Apply

2.0 years

0 Lacs

Gautam Buddha Nagar, Uttar Pradesh, India

On-site

Linkedin logo

We are seeking a dynamic and experienced Technical Trainer to join our engineering department. The ideal candidate will be responsible for designing and delivering technical training sessions to B.Tech students across various domains, ensuring they are industry-ready and equipped with practical, job-oriented skills. Role & Responsibility To train students in new-age technology (Computer Science Engineering) to bridge the industry-academia gap, leading to an increase in the employability of the students. Knowledge Proven experience in devising technical training programs for UG/PG engineering students in Higher Education Institutions To stay abreast of the latest software as per industry standards and have knowledge of modern training techniques and tools to deliver technical subjects To prepare training material (presentations, worksheets, etc.) To execute training sessions, webinars, and workshops for students To determine the overall effectiveness of programs and make improvements Technical Skills (Subject Areas of Delivering Training with a Practical Approach) 1. Core Programming Skills Languages: C, Python, Java, C++, JavaScript 2. Web Development Frontend: HTML, CSS, JavaScript, React.js/Next.js Backend: Node.js, Express, Django, or Spring Boot Full-Stack: MERN stack (MongoDB, Express, React, Node.js) 3. Data Science & Machine Learning Languages: Python (NumPy, pandas, scikit-learn, TensorFlow/PyTorch) Tools: Jupyter Notebook, Google Colab, MLflow 4. AI & Generative AI LLMs (Large Language Models): Understand how GPT, BERT, and Llama models work Prompt Engineering Fine-tuning & RAG (Retrieval-Augmented Generation) Hugging Face Transformers, LangChain, OpenAI APIs 5. Cloud Computing & DevOps Cloud Platforms: AWS, Microsoft Azure, Google Cloud Platform (GCP) DevOps Tools: Docker, Kubernetes, GitHub Actions, Jenkins, Terraform CI/CD Pipelines: Automated testing and deployment 6. Cybersecurity Basics: OWASP Top 10, Network Security, Encryption, Firewalls Tools: Wireshark, Metasploit, Burp Suite 7. Mobile App Development Native: Kotlin (Android), Swift (iOS) Cross-platform: Flutter, React Native 8. Blockchain & Web3 Technologies: Ethereum, Solidity, Smart Contracts Frameworks: Hardhat, Truffle 9. Database & Big Data Databases: SQL (MySQL, PostgreSQL), NoSQL (MongoDB, Redis) Big Data Tools: Apache Hadoop, Spark, Kafka Qualification & Years of Experience as per norms: B.Tech./MCA/M.Tech (IT/CSE) from top-tier institutes and reputed universities. Industry experience is desirable. Candidates must have a minimum of 2 years of training experience in the same domain.

Posted 2 days ago

Apply

3.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Linkedin logo

At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward and progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things. Job Title: Senior Associate Data Scientist Location: Bangalore Business & Team: Home Buying Decision Science Impact & Contribution: The Senior Associate Data Scientist will use technical knowledge and understanding of the business domain to deliver moderately to highly complex data science projects independently or with minimal guidance. You will also engage and collaborate with business stakeholders to clearly articulate findings to solve business problems. Roles & Responsibilities: Analyse complex data sets to extract insights and identify trends. Develop predictive models and algorithms to solve business problems. Work on deployment of models in production. Collaborate with cross-functional teams to understand requirements and deliver data-driven solutions. Clean, preprocess, and manipulate data for analysis through programming. Communicate findings and recommendations to stakeholders through reports and presentations. Stay updated with industry trends and best practices in data science. Contribute to the development and improvement of data infrastructure and processes. Design experiments and statistical analyses to validate hypotheses and improve models. Continuously learn and enhance skills in data science techniques and tools. Strongly support the adoption of data science across the organisation. Identify problems in the products, services and operations of the bank and solve them with innovative, research-driven solutions. Essential Skills: Strong hands-on programming experience in Python (mandatory), R, SQL, Hive and Spark. More than 3 years of relevant experience. Ability to write well-designed, modular and optimized code. Knowledge of H2O.ai, GitHub, Big Data and ML Engineering. Knowledge of commonly used data structures and algorithms. Good to have: knowledge of Time Series, NLP, Deep Learning and Generative AI. Good to have: knowledge and hands-on experience in developing solutions with Large Language Models. Must have been part of projects building and deploying predictive models in production (financial services domain preferred) involving large and complex data sets. Strong problem solving and critical thinking skills. Curiosity, fast learning capability and a team player attitude are a must. Ability to communicate clearly and effectively. Demonstrated expertise through blog posts, research, participation in competitions, speaking opportunities, patents and paper publications. Most importantly, the ability to identify and translate theories into real applications to solve practical problems. Preferred Skills: Good to have: knowledge of and hands-on experience in data engineering or model deployment. Experience in Data Science in any of Credit Risk, Pricing Modelling and Monitoring, Sales and Marketing, Campaign Analytics, Ecommerce Retail or banking products for retail or business banking is preferred. Solid foundation in Statistics and core ML algorithms at a mathematical (under-the-hood) level. Education Qualifications: Bachelor’s degree in Engineering in Computer Science/Information Technology.
If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 25/06/2025

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

The Manager, Software Development Engineering leads a team of technical experts in successfully executing technology projects and solutions that align with the strategy and have broad business impact. The Manager, Software Development Engineering will work closely with development teams to identify and understand key features and their underlying functionality while also partnering closely with Product Management and UX Design. They may exercise influence and govern overall end-to-end software development life-cycle activities, including management of support and maintenance releases, minor functional releases, and major projects. The Manager, Software Development Engineering will lead and provide technical guidance for process improvement programs while leveraging engineering best practices. In this people leadership role, Managers will recruit, train, motivate, coach, grow and develop Software Development Engineer team members at a variety of levels through their technical expertise and by providing continuous feedback to ensure employee expectations, customer needs and product demands are met. About the Role: Lead and manage a team of engineers, providing mentorship and fostering a collaborative environment. Design, implement, and maintain scalable data pipelines and systems to support business analytics and data science initiatives. Collaborate with cross-functional teams to understand data requirements and ensure data solutions align with business goals. Ensure data quality, integrity, and security across all data processes and systems. Drive the adoption of best practices in data engineering, including coding standards, testing, and automation. Evaluate and integrate new technologies and tools to enhance data processing and analytics capabilities. Prepare and present reports on engineering activities, metrics, and project progress to stakeholders. About You: Proficiency in programming languages such as Python, Java, or Scala. Data engineering with APIs in any programming language. Strong understanding of APIs, forward-looking knowledge of AI/ML tools and models, and some knowledge of software architecture. Experience with cloud platforms (e.g., AWS, Google Cloud) and big data technologies (e.g., Hadoop, Spark). Experience with REST/OData APIs. Strong problem-solving skills and the ability to work in a fast-paced environment. Excellent communication and interpersonal skills. Experience with data warehousing solutions such as BigQuery or Snowflake. Familiarity with data visualization tools and techniques. Understanding of machine learning concepts and frameworks. What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions.
Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 2 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Req ID: 324638 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements. Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%. Preferred Skills: Demonstrate production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Possess hands-on knowledge of Cloud and Distributed Data Storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, Google Dataproc. Showcase professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or Graduate degree preferred. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users.
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 2 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Req ID: 324631 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements. Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%. Preferred Skills: Demonstrate production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Possess hands-on knowledge of Cloud and Distributed Data Storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, Google Dataproc. Showcase professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or Graduate degree preferred. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users.
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 2 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Req ID: 324632 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements. Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%. Preferred Skills: Demonstrate production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Possess hands-on knowledge of Cloud and Distributed Data Storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Exhibit a strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, Google Dataproc. Showcase professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or Graduate degree preferred. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users.
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 2 days ago

Apply

Exploring Spark Jobs in India

The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
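
To give a concrete sense of the work these roles involve, here is a minimal PySpark sketch of a classic word-count job. This is an illustrative example rather than anything from a specific posting above; it assumes a local Spark installation (for instance via `pip install pyspark`), and the input path "input.txt" is a placeholder.

```python
# Minimal word count using the Spark DataFrame API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()

lines = spark.read.text("input.txt")  # one row per line, in a column named "value"
words = lines.selectExpr("explode(split(value, ' ')) AS word")
counts = words.groupBy("word").count().orderBy("count", ascending=False)

counts.show(10)  # show() is an action: it triggers the actual computation
spark.stop()
```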

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities have a high concentration of tech companies and startups actively hiring for Spark roles.

Average Salary Range

The average salary range for Spark professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Salaries may vary based on the company, location, and specific job requirements.

Career Path

In the field of Spark, a typical career progression may look like:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.

Related Skills

Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:

  • Hadoop
  • Java or Scala programming
  • Data processing and analytics
  • SQL databases

Having a combination of these skills can make a candidate more competitive in the job market.

Interview Questions

  • What is Apache Spark and how is it different from Hadoop? (basic)
  • Explain the difference between RDD, DataFrame, and Dataset in Spark. (medium)
  • How does Spark handle fault tolerance? (medium)
  • What is lazy evaluation in Spark? (basic)
  • Explain the concept of transformations and actions in Spark. (basic; see the first sketch after this list)
  • What are the different deployment modes in Spark? (medium)
  • How can you optimize the performance of a Spark job? (advanced)
  • What is the role of a Spark executor? (medium)
  • How does Spark handle memory management? (medium)
  • Explain the Spark shuffle operation. (medium)
  • What are the different types of joins in Spark? (medium)
  • How can you debug a Spark application? (medium)
  • Explain the concept of checkpointing in Spark. (medium)
  • What is lineage in Spark? (basic)
  • How can you monitor and manage a Spark application? (medium)
  • What is the significance of the Spark Driver in a Spark application? (medium)
  • How does Spark SQL differ from traditional SQL? (medium)
  • Explain the concept of broadcast variables in Spark. (medium)
  • What is the purpose of the SparkContext in Spark? (basic)
  • How does Spark handle data partitioning? (medium)
  • Explain the concept of window functions in Spark SQL. (advanced)
  • How can you handle skewed data in Spark? (advanced; see the salting sketch after this list)
  • What is the use of accumulators in Spark? (advanced)
  • How can you schedule Spark jobs using Apache Oozie? (advanced)
  • Explain the process of Spark job submission and execution. (basic)
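
Several of the questions above, such as lazy evaluation, transformations versus actions, and broadcast variables, are easiest to internalize with running code. Below is a minimal PySpark sketch; the values and names are illustrative only.

```python
# Transformations vs. actions, lazy evaluation, and broadcast variables.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("InterviewPrep").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(1, 11))

# Transformations are lazy: nothing executes here, Spark only records lineage.
evens = rdd.filter(lambda x: x % 2 == 0)
squares = evens.map(lambda x: x * x)

# collect() is an action: it forces evaluation of the whole lineage.
print(squares.collect())  # [4, 16, 36, 64, 100]

# A broadcast variable ships a read-only value to each executor once,
# instead of serializing it with every task.
prices = sc.broadcast({"apple": 10, "banana": 5})
fruits = sc.parallelize(["apple", "banana", "apple"])
print(fruits.map(lambda f: prices.value[f]).sum())  # 25

spark.stop()
```

On a real cluster, the broadcast above is sent over the network once per executor rather than once per task, which is exactly the optimization interviewers expect you to mention.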
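
For the skewed-data question, one standard remedy is key salting before a join: each hot key is split into several artificial sub-keys so the work spreads across partitions. The sketch below is a hedged illustration with invented table and column names.

```python
# Salted join: mitigate skew on a hot join key by fanning it out over N buckets.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("SaltedJoin").getOrCreate()
N = 8  # number of salt buckets; tune to the observed skew

# 'events' is large and heavily skewed on user_id; 'users' is small.
events = spark.createDataFrame(
    [("u1", "click")] * 100 + [("u2", "view")] * 3, ["user_id", "action"])
users = spark.createDataFrame(
    [("u1", "IN"), ("u2", "US")], ["user_id", "country"])

# Add a random salt to the big side; replicate the small side across all
# salt values so every salted key still finds its match.
salted_events = events.withColumn("salt", (F.rand() * N).cast("int"))
salted_users = users.crossJoin(spark.range(N).withColumnRenamed("id", "salt"))

joined = salted_events.join(salted_users, ["user_id", "salt"]).drop("salt")
joined.groupBy("country").count().show()
```

The trade-off is that the small side is replicated N times, so salting pays off only when the skew on the large side dominates that replication cost.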

Closing Remark

As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!
