9426 Spark Jobs - Page 9

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

India

On-site

Our technology services client is seeking multiple Data Analytics professionals with SQL, Databricks, and ADF to join their team on a contract basis. These positions offer strong potential for conversion to full-time employment upon completion of the initial contract period. Further details:

Role: Data Analytics with SQL, Databricks, ADF
Mandatory Skills: SQL, Databricks, ADF
Experience: 5-7 Years
Location: Pan India
Notice Period: Immediate to 15 days

Required Qualifications:
- 5 years of software solution development using agile, DevOps, and a product model, including designing, developing, and implementing large-scale applications or data engineering solutions
- 5+ years of data analytics experience using SQL
- 5+ years of full-stack development experience, preferably in Azure
- 5+ years of cloud development (Microsoft Azure preferred), including Azure Event Hub, Azure Data Factory, Azure Functions, ADX, ASA, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI
- 1+ years of FastAPI experience is a plus
- Airline industry experience
- Expertise with the Azure technology stack for data management, data ingestion, capture, processing, curation, and creating consumption layers
- Azure Development Track Certification (preferred)
- Spark Certification (preferred)

If you are interested, kindly share your updated resume with Sathwik@s3staff.com.
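
To make the mandatory skills concrete, here is a minimal PySpark sketch of the kind of Databricks transformation this role describes: read data landed by an ADF pipeline, aggregate it, and persist a curated Delta table. Every path, table, and column name below is a hypothetical placeholder, not part of the actual role.

```python
# Minimal sketch, assuming a Databricks workspace with Delta Lake available.
# All paths and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

# Read raw data (e.g., landed by an ADF copy activity) from a hypothetical path.
orders = spark.read.format("delta").load("/mnt/raw/orders")

# Aggregate completed orders per day.
daily = (
    orders
    .where(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

# Persist the curated layer for downstream consumption (e.g., Power BI).
daily.write.format("delta").mode("overwrite").save("/mnt/curated/orders_daily")
```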

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join us as a Data Engineer

This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You'll be simplifying the bank by developing innovative, data-driven solutions, using insight to be commercially successful, and keeping our customers' and the bank's data safe and secure. Participating actively in the data engineering community, you'll deliver opportunities to support the bank's strategic direction while building your network across the bank. We're offering this role at associate level.

What you'll do:
As a Data Engineer, you'll play a key role in driving value for our customers by building data solutions. You'll carry out data engineering tasks to build, maintain, test, and optimise a scalable data architecture, as well as carrying out data extractions, transforming data to make it usable to data analysts and scientists, and loading data into data platforms.

You'll also be:
- Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
- Practicing DevOps adoption in the delivery of data engineering, proactively performing root cause analysis and resolving issues
- Collaborating closely with core technology and architecture teams in the bank to build data knowledge and data solutions
- Developing a clear understanding of data platform cost levers to build cost-effective and strategic solutions
- Sourcing new data using the most appropriate tooling and integrating it into the overall solution to deliver for our customers

The skills you'll need:
To be successful in this role, you'll need five-plus years' experience, with a good understanding of data usage and dependencies with wider teams and the end customer, as well as experience of extracting value and features from large-scale data. You'll also perform database migrations from soon-to-be-decommissioned platforms onto strategic analytical platforms in a controlled and structured manner.

You'll also demonstrate:
- Experience of Tableau, Power BI, Snowflake, PostgreSQL, MongoDB, Python, Spark, Autosys, and Airflow
- Experience of using programming languages alongside knowledge of data and software engineering fundamentals
- Experience in AWS cloud ecosystems
- Strong communication skills, with the ability to proactively engage with a wide range of stakeholders

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description

As part of the Last Mile Science & Technology organization, you'll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will develop complex data engineering solutions using the AWS technology stack (S3, Glue, IAM, Redshift, Athena). You should have deep expertise in, and passion for, working with large data sets, building complex data processes, performance tuning, bringing data from disparate data stores, and programmatically identifying patterns. You will work with business owners to develop and define key business questions and requirements. You will provide guidance and support to other engineers with industry best practices and direction. Analytical ingenuity and leadership, business acumen, effective communication, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.

Key job responsibilities:
- Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack, Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc. (a minimal PySpark sketch follows this listing)
- Extract huge volumes of structured and unstructured data from various sources (relational, non-relational, and NoSQL databases) and message streams, and construct complex analyses
- Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting
- Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis
- Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance

Basic Qualifications:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more scripting languages (e.g., Python, KornShell)
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.

Preferred Qualifications:
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3009499
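
The first responsibility above (curating raw data on S3 into an analytics-ready layer) might look roughly like the following PySpark sketch. Bucket names, schema, and the partition column are invented for illustration; this is not Amazon's code.

```python
# Illustrative PySpark job for an S3 raw-to-curated pattern, e.g., on EMR.
# Bucket names and fields (event_id, event_ts) are hypothetical assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lastmile_events_curate").getOrCreate()

# Read raw JSON events from a hypothetical landing bucket.
events = spark.read.json("s3://example-raw-bucket/lastmile/events/")

curated = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["event_id"])  # basic dedup before loading downstream
)

# Write a partitioned Parquet curated layer that Athena/Redshift Spectrum can query.
(curated.write
    .mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/lastmile/events/"))
```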

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description

- Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack, Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
- Extract huge volumes of structured and unstructured data from various sources (relational, non-relational, and NoSQL databases) and message streams, and construct complex analyses
- Develop and manage ETLs to source data from various systems and create a unified data model for analytics and reporting
- Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis
- Participate in the full development cycle for ETL: design, implementation, validation, documentation, and maintenance

Basic Qualifications:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- 4+ years of SQL experience
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
- Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets

Preferred Qualifications:
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A3009501

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary
We are seeking an experienced and innovative Data Scientist to join our team. The ideal candidate will leverage data-driven insights to solve complex problems, optimize business processes, and contribute to strategic decision-making. This role requires expertise in statistical analysis, machine learning, and data visualization to extract valuable insights from large datasets.

Key Responsibilities:
- Collect, clean, and preprocess structured and unstructured data from various sources
- Apply statistical methods and machine learning algorithms to analyze data and identify patterns
- Develop predictive and prescriptive models to support business goals (a hedged training sketch follows this listing)
- Collaborate with stakeholders to define data-driven solutions for business challenges
- Visualize data insights using tools like Power BI, Tableau, or Matplotlib
- Perform A/B testing and evaluate model accuracy using appropriate metrics
- Optimize machine learning models for scalability and performance
- Document processes and communicate findings to non-technical stakeholders
- Stay updated with advancements in data science techniques and tools

Required Skills and Qualifications:
- Proficiency in programming languages like Python, R, or Scala
- Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn
- Experience with SQL and NoSQL databases for data querying and manipulation
- Understanding of big data technologies like Hadoop, Spark, or Kafka
- Ability to perform statistical analysis and interpret results
- Experience with data visualization libraries like Seaborn, Plotly, or D3.js
- Excellent problem-solving and analytical skills
- Strong communication skills to present findings to technical and non-technical audiences

Preferred Qualifications:
- Master's or PhD in Data Science, Statistics, Computer Science, or a related field
- Experience with cloud platforms (e.g., AWS, Azure, GCP) for data processing and model deployment
- Knowledge of NLP (Natural Language Processing) and computer vision
- Familiarity with DevOps practices and containerization tools like Docker and Kubernetes
- Exposure to time-series analysis and forecasting techniques
- Certification in data science or machine learning tools is a plus

About Us
Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency – to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group.

Equal Opportunity Employer
Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Information Security Responsibilities:
- Understand and adhere to Information Security policies, guidelines, and procedures, and practice them for the protection of organizational data and information systems
- Take part in information security training and act accordingly while handling information
- Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO)
- Understand and adhere to the additional information security responsibilities that are part of the assigned job role
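
As a concrete illustration of the "build and deploy machine learning models" responsibility above, here is a minimal scikit-learn train-and-evaluate sketch on synthetic data. Nothing here is Bristlecone's code; the features and labels are randomly generated stand-ins for real business data.

```python
# Minimal sketch of a train/evaluate loop, assuming synthetic tabular data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))  # 1,000 rows, 8 invented features
# Label depends on the first two features plus noise, so there is signal to learn.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Evaluate with an appropriate metric, as the listing asks.
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```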

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Analyst, Big Data Analytics & Engineering

Overview
The Services Portfolio Management team is looking for a Senior Analyst, Big Data Analytics & Engineering to build a new product that will serve actionable insights to all Programs under the Mastercard Services Portfolio. The ideal candidate should have a proven ability to analyze large data sets and effectively communicate their findings. They must have prior experience in product development, be highly motivated, innovative, intellectually curious, analytical, and possess an entrepreneurial mindset.

Role:
- Build a solution stack for a dashboard, including a front end in Power BI/Tableau, data integrations, database models, ETL jobs, etc.
- Identify opportunities to introduce automation and AI tools into workflows
- Translate product requirements into tangible technical solution specifications and high-quality, on-time deliverables
- Partner with other automation specialists across Services I&E to learn and build best practices for building and running the Portfolio Cockpit tool
- Identify gaps and conceptualize new product/platform capabilities as required
- Proactively identify automation opportunities

All About You:
- Experience with data analysis, with a background in building KPIs and reporting
- Power BI experience preferred, or other reporting tools like Tableau or DOMO
- Experience with PowerApps or other no/low-code app development tools is a plus
- Experience in systems analysis and application design and development
- Ability to deliver technology products/services in a high-growth environment where priorities change rapidly
- Proactive self-starter seeking initiatives for advancement
- Understanding of data architecture, and some experience in building logical/conceptual data models or creating data mapping documentation
- Experience with data validation, quality control, and cleansing processes for new and existing data sources
- Strong problem-solving, quantitative, and analytical skills
- Advanced SQL skills, with the ability to write optimized queries for large data sets
- Exposure to Python, Scala, Spark, cloud, and other related technologies is advantageous
- In-depth technical knowledge and the ability to learn new technologies
- Attention to detail and quality
- Team player with effective communication skills; must be able to interact with management and internal stakeholders and collect requirements
- Must be able to perform in a team, use judgment, and operate under ambiguity
- Experience in leveraging generative AI tools to enhance day-to-day tasks is beneficial

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices
- Ensure the confidentiality and integrity of the information being accessed
- Report any suspected information security violation or breach
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines

R-251036

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Yulu
Yulu is India's largest shared electric mobility-as-a-service company. Yulu's mission is to reduce traffic congestion and air pollution by running smart, shared, and small-sized electric vehicles. Yulu is led by a mission-driven and seasoned founding team and has won several prestigious awards for its impact and innovation. Yulu is currently enabling daily commuters for short-distance movements and helping gig workers deliver goods over the last mile with its eco-friendly rides at pocket-friendly prices, reducing the carbon footprint. Yulu is excited to welcome people with high integrity, commitment, the ability to collaborate and take ownership, high curiosity, and an appetite for taking intelligent risks. If our mission brings a spark into your eyes, and if you'd like to join a passionate team that's committed to transforming how people commute, work, and explore their cities - come, join the #Unstoppable Yulu tribe! Stay updated on the latest news from Yulu at https://www.yulu.bike/newsroom and on our website, https://www.yulu.bike/.

What you'll do:
- Take full ownership of your work, ensuring every detail is meticulously crafted - from initial sketches to high-fidelity final designs
- Move fast to generate multiple concepts and prototypes, knowing when to explore further and when to pivot to a new approach based on user testing and feedback
- Collaborate closely with engineers, product managers, and stakeholders to align design strategies with business goals and technical feasibility
- Consider existing insights, technical constraints, business needs, and platform demands to create informed, data-driven solutions
- Play a crucial role in fostering a collaborative, high-performing design culture

Who you are:
- Your experience speaks volumes: you have 2+ years of hands-on experience in product design, specifically for mobile and web platforms, with a strong portfolio of shipped products
- You have a user-first approach: you believe in human-centred design, conducting research and usability testing, and iterating based on real user feedback to refine your work
- You think strategically and are data-driven: you don't just design; you solve problems by defining the right challenges, leveraging data insights, and crafting scalable, impactful solutions
- You have a collaborative mindset: you thrive in cross-functional teams, working closely with engineers, product managers, and researchers to create user-centric, business-aligned designs
- You design with Zen principles: you craft simple, balanced, and intuitive experiences that evoke deep, visceral emotions in our users at every interaction

We assure you:
- Be part of an innovative company that values professional growth, trustworthy colleagues, a fun office environment, and employee well-being
- Work on impactful strategies that directly shape the workforce and make positive contributions to the business
- A culture that fosters growth, integrity, and innovation

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
As a Sr. Data Engineer on the Sales Automation Engineering team, you will work across several areas of data engineering and data architecture, including:
- Data Migration: from Hive/other DBs to Salesforce/other DBs and vice versa
- Data Modeling: understand existing sources and data models, identify the gaps, and build the future-state architecture
- Data Pipelines: build data pipelines for several data mart / data warehouse and reporting requirements
- Data Governance: build the framework for data governance and data quality profiling and reporting

What the Candidate Will Do:
- Demonstrate strong knowledge of, and the ability to operationalize, leading data technologies and best practices
- Collaborate with internal business units and data teams on business requirements, data access, processing/transformation, and reporting needs, and leverage existing and new tools to provide solutions
- Build dimensional data models to support business requirements and reporting needs
- Design, build, and automate the deployment of data pipelines and applications to support reporting and data requirements
- Research and recommend technologies and processes to support rapid scale and future-state growth initiatives on the data front
- Prioritize business needs, leadership questions, and ad-hoc requests for on-time delivery
- Collaborate on architecture and technical design discussions to identify and evaluate high-impact process initiatives
- Work with the team to implement data governance and access control, and identify and reduce security risks
- Perform and participate in code reviews, peer inspections, and technical design/specification work
- Develop performance metrics to establish process success, and work cross-functionally to consistently and accurately measure success over time
- Deliver measurable business process improvements while re-engineering key processes and capabilities, and map them to the future-state vision
- Prepare documentation and specifications on detailed design
- Work in a globally distributed team in an Agile/Scrum approach

Basic Qualifications:
- Bachelor's degree in computer science or a similar technical field of study, or equivalent practical experience
- 8+ years of professional software development experience, including experience in the data engineering and architecture space
- Ability to interact with product managers and business stakeholders to understand data needs, and to help build data infrastructure that scales across the company
- Very strong SQL skills: advanced-level SQL coding (window functions, CTEs, dynamic variables, hierarchical queries, materialized views, etc.; a short illustration follows this listing)
- Experience with data-driven architecture and systems design; knowledge of Hadoop-related technologies such as HDFS, Apache Spark, Apache Flink, Hive, and Presto
- Good hands-on experience with object-oriented programming languages like Python
- Proven experience with large-scale distributed storage and database systems (SQL or NoSQL, e.g., Hive, MySQL, Cassandra) and with data warehousing architecture and data modeling
- Working experience with cloud technologies like GCP, AWS, Azure
- Knowledge of reporting tools like Tableau and/or other BI tools

Preferred Qualifications:
- Python libraries (Apache Spark, Scala)
- Working experience with cloud technologies like GCP, AWS, Azure
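
For the "advanced SQL" bullet above, here is a short illustration combining a CTE with a window function, run through Spark SQL since the role also lists Apache Spark. The table and columns are hypothetical.

```python
# Sketch of CTE + window-function SQL executed via Spark SQL.
# "sales" and its columns are invented for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_window_demo").getOrCreate()

spark.createDataFrame(
    [("A", "2024-01-01", 100.0), ("A", "2024-01-02", 150.0), ("B", "2024-01-01", 90.0)],
    ["account", "day", "amount"],
).createOrReplaceTempView("sales")

spark.sql("""
    WITH daily AS (                       -- CTE: aggregate to one row per account/day
        SELECT account, day, SUM(amount) AS total
        FROM sales
        GROUP BY account, day
    )
    SELECT account, day, total,
           SUM(total) OVER (PARTITION BY account ORDER BY day) AS running_total
    FROM daily
""").show()
```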

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Analyst, Big Data Analytics & Engineering

Overview
The Services Portfolio Management team is looking for a Senior Analyst, Big Data Analytics & Engineering to build a new product that will serve actionable insights to all Programs under the Mastercard Services Portfolio. The ideal candidate should have a proven ability to analyze large data sets and effectively communicate their findings. They must have prior experience in product development, be highly motivated, innovative, intellectually curious, analytical, and possess an entrepreneurial mindset.

Role:
- Build a solution stack for a dashboard, including a front end in Power BI/Tableau, data integrations, database models, ETL jobs, etc.
- Identify opportunities to introduce automation and AI tools into workflows
- Translate product requirements into tangible technical solution specifications and high-quality, on-time deliverables
- Partner with other automation specialists across Services I&E to learn and build best practices for building and running the Portfolio Cockpit tool
- Identify gaps and conceptualize new product/platform capabilities as required
- Proactively identify automation opportunities

All About You:
- Experience with data analysis, with a background in building KPIs and reporting
- Power BI experience preferred, or other reporting tools like Tableau or DOMO
- Experience with PowerApps or other no/low-code app development tools is a plus
- Experience in systems analysis and application design and development
- Ability to deliver technology products/services in a high-growth environment where priorities change rapidly
- Proactive self-starter seeking initiatives for advancement
- Understanding of data architecture, and some experience in building logical/conceptual data models or creating data mapping documentation
- Experience with data validation, quality control, and cleansing processes for new and existing data sources
- Strong problem-solving, quantitative, and analytical skills
- Advanced SQL skills, with the ability to write optimized queries for large data sets
- Exposure to Python, Scala, Spark, cloud, and other related technologies is advantageous
- In-depth technical knowledge and the ability to learn new technologies
- Attention to detail and quality
- Team player with effective communication skills; must be able to interact with management and internal stakeholders and collect requirements
- Must be able to perform in a team, use judgment, and operate under ambiguity
- Experience in leveraging generative AI tools to enhance day-to-day tasks is beneficial

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices
- Ensure the confidentiality and integrity of the information being accessed
- Report any suspected information security violation or breach
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines

R-251036

Posted 1 day ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you'll do:
- Design, develop, and operate high-scale applications across the full engineering stack
- Design, develop, test, deploy, maintain, and improve software
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.)
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality
- Participate in a tight-knit, globally distributed engineering team
- Triage product or system issues, and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality
- Research, create, and develop software applications to extend and improve Equifax solutions
- Manage sole project priorities, deadlines, and deliverables
- Collaborate on scalability issues involving access to data and information
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities

What experience you need:
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, CSS
- 5+ years of experience with cloud technology: GCP, AWS, or Azure
- 5+ years of experience designing and developing cloud-native solutions
- 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines; understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What could set you apart:
- Knowledge of or experience with Apache Beam for stream and batch data processing (a toy pipeline follows this listing)
- Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)
- Exposure to data visualization tools or platforms
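
Since Apache Beam is called out as a differentiator, here is a toy Beam word-count pipeline in Python (batch mode, default DirectRunner) illustrating its transform-chaining model. Purely illustrative, not Equifax code; the input words are invented.

```python
# A minimal Apache Beam sketch: count occurrences per key.
import apache_beam as beam

with beam.Pipeline() as p:  # DirectRunner by default; runs locally
    (
        p
        | "Create" >> beam.Create(["alpha", "beta", "alpha", "gamma"])
        | "PairWithOne" >> beam.Map(lambda w: (w, 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)  # e.g., ('alpha', 2)
    )
```

The same pipeline shape works in streaming mode by swapping the bounded source for an unbounded one, which is the batch/stream unification the listing alludes to.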

Posted 1 day ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Jubilant Bhartia Group
Jubilant Bhartia Group is a global conglomerate founded by Mr. Shyam S Bhartia and Mr. Hari S Bhartia, with a strong presence in diverse sectors like Pharmaceuticals, Contract Research and Development Services, Proprietary Novel Drugs, Life Science Ingredients, Agri Products, Performance Polymers, Food Service (QSR), Food, Auto, Consulting in Aerospace, and Oilfield Services. Jubilant Bhartia Group has four flagship companies: Jubilant Pharmova Limited, Jubilant Ingrevia Limited, Jubilant FoodWorks Limited and Jubilant Industries Limited. Currently the group has a global workforce of around 43,000 employees.

About Jubilant Ingrevia Limited
Jubilant Ingrevia is a global integrated Life Science products and innovative solutions provider serving pharmaceutical, agrochemical, nutrition, consumer and industrial customers with customised products and solutions that are innovative, cost-effective and conforming to premium quality standards. Ingrevia is born out of a union of "Ingre", denoting ingredients, and "vie", French for life (i.e., ingredients for life). Jubilant Ingrevia's history goes back to 1978 with the incorporation of VAM Organics Limited, which later became Jubilant Organosys, then Jubilant Life Sciences, and has now been demerged into an independent entity, Jubilant Ingrevia Limited, listed on both of India's stock exchanges. Over the years, the company has developed global capacities and leadership in its chosen business segments. We have more than 40 years of experience in Life Science Chemicals, 30+ years of experience in Pyridine chemistry and value-added Specialty Chemicals, and 20+ years of experience in Vitamin B3, B4 and other nutraceutical products. We have strategically segmented our business into three business segments, as explained below, and are rapidly growing revenue in all three.

Speciality Chemicals Segment: We propose to launch a new platform of Diketene and its value-added derivatives, and to forward-integrate our crop protection chemicals to value-added agrochemicals (herbicides, fungicides and insecticides) by adding new facilities. We are an established 'partner of choice' in CDMO, with further investment plans in GMP and non-GMP multi-product facilities for pharma and crop protection customers.

Nutrition & Health Solutions Segment: We propose to expand the existing capacity of Vitamin B3 to remain one of the market leaders, and to introduce new branded animal as well as human nutrition and health premixes.

Chemical Intermediates Segment: We propose to expand our existing acetic anhydride capacity, add value-added anhydrides and aldehydes, and enhance volumes in speciality ethanol.

We have 5 world-class manufacturing facilities: one in UP at Gajraula, two in Gujarat at Bharuch and Baroda, and two in Maharashtra at Nira and Ambernath. We operate 61 plants across these 5 sites, giving us a multi-plant and multi-location advantage. Find out more about us at www.jubilantingrevia.com

The Position
Organization: Jubilant Ingrevia Limited
Designation: Data Scientist
Location: Noida

Job Summary: Plays a crucial role in helping businesses make informed decisions by leveraging data; will collaborate with stakeholders, design data models, create algorithms, and share meaningful insights to drive business success.

Key Responsibilities:
- Work with supply chain, manufacturing, sales managers, customer account managers and the quality function to produce algorithms
- Gather and interpret data from various sources
- Clean and verify the accuracy of data sets to ensure data integrity
- Develop and implement data collection systems and strategies to optimize efficiency and accuracy
- Apply statistical techniques to analyze and interpret complex data sets
- Develop and implement statistical models for predictive analysis
- Build and deploy machine learning models to solve business problems
- Create visual representations of data through charts, graphs, and dashboards to communicate findings effectively
- Develop dashboards and reports for ongoing monitoring and analysis
- Create, modify and improve complex manufacturing schedules
- Create scenario-planning models for manufacturing; develop a manufacturing schedule adherence probability model
- Regularly monitor and evaluate data quality, making recommendations for improvements as necessary and ensuring compliance with data privacy and security regulations

Person Profile
Qualification: B.E / M.Sc (Maths/Statistics)
Experience: 2-5 years

Desired Skills & Must-Haves:
- 2-5 years of relevant experience in the chemical/manufacturing industry
- Hands-on Generative AI; exposure to Agentic AI
- Proficiency in data analysis tools such as Microsoft Excel, SQL, and statistical software (e.g., R or Python)
- Proficiency in programming languages such as Python or R
- Expertise in statistical analysis, machine learning algorithms, and data manipulation
- Strong analytical and problem-solving skills, with the ability to handle complex data sets
- Excellent attention to detail and a high level of accuracy in data analysis
- Solid knowledge of data visualization techniques, and experience with visualization tools like Tableau or Power BI
- Strong communication skills to present findings and insights to non-technical stakeholders effectively
- Knowledge of statistical methodologies and techniques, including regression analysis, clustering, and hypothesis testing
- Familiarity with data modeling and database management concepts
- Experience in manipulating and cleansing large data sets
- Ability to work collaboratively in a team environment and adapt to changing priorities
- Experience with big data technologies (e.g., Hadoop, Spark)
- Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud)
- Familiarity with data engineering and database technologies

Jubilant is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, colour, gender identity or expression, genetic information, marital status, medical condition, national origin, political affiliation, race, ethnicity, religion or any other characteristic protected by applicable local laws, regulations and ordinances.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

The E+D Growth Team's role is to help grow our user and customer base so we can fulfill Microsoft's mission of empowering every person and organization on the planet to achieve more. We do this through Product-Led Growth motions that we develop, facilitate, and partner with teams throughout Microsoft to deliver, so we can bring more of Microsoft's software - across Microsoft 365, Windows, and elsewhere - to more users and convert those users into customers. We work with every segment of the market, including consumers and businesses of all sizes, helping to facilitate improved engagement, retention, and acquisition for the wide array of products inside the Experiences and Devices organization.

Lead the next wave of growth for Microsoft's most transformative products. We are looking for an experienced, strategic, and customer-obsessed Principal Product Manager to drive Copilot and M365 subscription growth across the Microsoft ecosystem. As part of the E+D Growth team, you will help define and deliver our Product-Led Growth (PLG) strategy across Windows, Office, and beyond, crafting magical, AI-powered experiences that hundreds of millions of people rely on every day. Our team lives at the intersection of product innovation, experimentation, and business impact. We are builders, explorers, and connectors, and we are looking for a like-minded PM who thrives on driving big ideas from spark to scale.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities:
- Define and lead the PLG strategy to drive deep adoption of Copilot and M365 experiences across Microsoft products
- Champion customer-driven thinking and experimentation practices that unlock growth
- Partner across disciplines (design, engineering, research, marketing, business) to deliver end-to-end experiences that delight users and move the business
- Lead initiatives that bridge technical innovation with user value, delivering holistic improvements across multiple customer touchpoints
- Use data, insights, and storytelling to align stakeholders, inspire teams, and make bold, high-quality decisions

Required Qualifications:
- Bachelor's degree AND 8+ years of experience in product/service/project/program management or software development, OR equivalent experience
- Experience managing cross-functional and/or cross-team projects
- Expertise in Product-Led Growth (PLG) methodologies: hypothesis-driven development, experimentation frameworks, data-informed decision-making
- A strong track record of leading product strategies and shipping experiences that deliver measurable growth and customer impact at a global scale
- Deep experience working in cross-functional environments and influencing outcomes across diverse teams and senior stakeholders
- A learning mindset: fluent in using qualitative and quantitative insights to frame hypotheses, drive experiments, and iterate at speed
- Executive communication skills: you know how to connect the dots between product investments, customer needs, and business outcomes
- Passion for building not just great products, but also great team culture, where collaboration, inclusion, and continuous improvement are core

Other Requirements:
The ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check (this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter).

Preferred Qualifications:
- Exceptional skills in influencing and aligning diverse stakeholders across engineering, design, marketing, research, and business disciplines
- Ability to think strategically while diving deep into details: you can balance big-picture vision with day-to-day execution
- Experience working with AI/ML-powered experiences, platform services, or large-scale subscription businesses is a plus
- Passion for customer-centric innovation, operational excellence, and building inclusive, high-performance team cultures

#ExDGrowth #IDCMicrosoft #DPG

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from diverse backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values diverse backgrounds. We believe your unique perspective is important, and you'll build relationships by being authentic and respectful. At Target, inclusion is a core value. We aim to create equitable experiences for all, regardless of their dimensions of difference. As an equal opportunity employer, Target provides diverse opportunities for everyone to grow and win.

Behind one of the world's best-loved brands is a uniquely capable and brilliant team of data scientists, engineers and analysts. The Target Data & Analytics team creates the tools and data products to sustainably educate and enable our business partners to make great data-based decisions at Target. We help develop the technology that personalizes the guest experience, from product recommendations to relevant ad content. We're also the source of the data and analytics behind Target's Internet of Things (IoT) applications, fraud detection, supply chain optimization and demand forecasting. We play a key role in identifying the test-and-measure or A/B test opportunities that continuously help Target improve the guest experience, whether guests love to shop in stores or at Target.com.

About This Career
This role is about being passionate about data, analysis, metrics development, and feature experimentation, and their application to improving business strategies and supporting the GSCL operations team.
- Develop, model and apply analytical best practices while upskilling and coaching others on new and emerging technologies, raising the bar for performance in analysis by sharing well-documented analytical solutions with others (clients, peers, etc.)
- Drive a continuous improvement mindset by seeking out new ways to solve problems through formal trainings, peer interactions and industry publications, to continually improve technically, implement best practices and sharpen analytical acumen
- Be an expert in a specific business domain; be self-directed and drive execution towards outcomes; understand business inter-dependencies; conduct detailed problem solving; remediate obstacles; use independent judgement and decision-making to deliver to product scope; provide inputs to establish product/project timelines
- Participate in learning forums, or be a buddy, to help increase awareness and adoption of current technical topics relevant to the analytics competency, e.g. tools (R, Python) and exploratory and descriptive techniques (basic statistics and modelling)
- Champion participation in internal meetups and hackathons; present in internal conferences relevant to the analytics competency
- Contribute to the evaluation and design of relevant technical guides and tools to hire great talent, by partnering with talent acquisition
- Participate in Agile ceremonies to keep the team up to date on task progress, as needed
- Develop and analyse data reports/dashboards/pipelines, and perform RCA and troubleshooting of issues that arise, using exploratory and systemic techniques

About You
- B.E/B.Tech (2-3 years of relevant experience), M.Tech, M.Sc., or MCA (2+ years of relevant experience)
- Candidates with strong domain knowledge and relevant experience in supply chain / retail analytics are highly preferred
- Strong data understanding: inference of patterns, root cause, statistical analysis, forecasting/predictive modelling, etc.
- Advanced SQL experience writing complex queries
- Hands-on experience with analytics tools: Hadoop, Hive, Spark, Python, R, Domo and/or equivalent technologies
- Experience working with product teams and business leaders to develop product roadmaps and feature development
- Able to support conclusions with analytical evidence, using descriptive stats, inferential stats and data visualizations (a minimal A/B-test sketch follows this listing)
- Strong analytical, problem-solving, and conceptual skills
- Demonstrated ability to work with ambiguous problem definitions, recognize dependencies, and deliver impactful solutions through logical problem solving and technical ideation
- Excellent communication skills, with the ability to speak to both business and technical teams and translate ideas between them
- Intellectually curious, high-energy, and a strong work ethic
- Comfort with ambiguity and open-ended problems in support of supply chain operations

Useful links:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
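
As a sketch of the inferential-stats and A/B-testing skills referenced above, here is a two-proportion z-test on hypothetical conversion counts using statsmodels. The numbers are invented for illustration only.

```python
# Minimal A/B-test evaluation sketch: did the treatment arm convert better?
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 480]   # control, treatment conversions (hypothetical)
samples = [10_000, 10_000] # users exposed in each arm (hypothetical)

z_stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")  # small p suggests a real lift
```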

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mumbai

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements
- Strive for continuous improvement by testing the built solution and working under an agile framework
- Discover and implement the latest technology trends to maximize and build creative solutions

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing (a minimal sketch follows this listing)
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools
- Data engineering skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering
- Good to have: detection and prevention tools for company products and platform- and customer-facing systems
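
To ground the PySpark and Pandas requirements above, here is a hedged sketch of a vectorized pandas UDF applied inside a Spark 3.x job. The DataFrame, column names, and tax rate are invented for illustration.

```python
# Sketch pairing Pandas and PySpark: a vectorized pandas UDF on a Spark DataFrame.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.appName("pandas_udf_demo").getOrCreate()
df = spark.createDataFrame([(1, 2.0), (2, 3.5), (3, 7.25)], ["id", "price"])

@pandas_udf("double")
def with_tax(price: pd.Series) -> pd.Series:
    # Executed as vectorized pandas batches on the executors, not row by row.
    return price * 1.18  # 18% tax rate chosen arbitrarily for the example

df.withColumn("price_incl_tax", with_tax("price")).show()
```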

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications, such as Elasticsearch and Splunk, for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- BE / B.Tech in any stream, or M.Sc. (Computer Science/IT) / M.C.A, with a minimum of 3-5 years of experience
- Experience in front-end development specializing in React.js and TypeScript, JavaScript, MySQL
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java
- Proficient understanding of code versioning tools such as GitHub, GitLab, Bitbucket

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Kochi

Work from Office

Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RFP responses, solution architecture, planning and estimation. Contribute to reusable component / asset / accelerator development to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews and quality assurance, and act as a design authority.

Required education: Bachelor's Degree
Preferred education: Non-Degree Program

Required technical and professional expertise:
- Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems
- Experience in data engineering and architecting data platforms; experience in architecting and implementing data platforms on the Azure Cloud Platform
- Experience on Azure cloud is mandatory (ADLS Gen 1 / Gen 2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow
- Experience in the big data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks

Preferred technical and professional experience:
- Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem
- Experience with and exposure to the implementation of Data Fabric and Data Mesh concepts and solutions, like Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric
- Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle clients' needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Design, develop, and maintain Ab Initio graphs for extracting, transforming, and loading (ETL) data from diverse sources to various target systems; implement data quality and validation processes within Ab Initio
- Data modeling and analysis: collaborate with data architects and business analysts to understand data requirements and translate them into effective ETL processes; analyze and model data to ensure optimal ETL design and performance
- Ab Initio components: utilize Ab Initio components such as Transform Functions, Rollup, Join, Normalize, and others to build scalable and efficient data integration solutions; implement best practices for reusable Ab Initio components

Preferred technical and professional experience:
- Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization; conduct performance tuning and troubleshooting as needed
- Collaboration: work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes
- Participate in design reviews and provide technical expertise to enhance overall solution quality, and contribute to documentation

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle clients' needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Design, develop, and maintain Ab Initio graphs for extracting, transforming, and loading (ETL) data from diverse sources to various target systems; implement data quality and validation processes within Ab Initio
- Data modelling and analysis: collaborate with data architects and business analysts to understand data requirements and translate them into effective ETL processes; analyse and model data to ensure optimal ETL design and performance
- Ab Initio components: utilize Ab Initio components such as Transform Functions, Rollup, Join, Normalize, and others to build scalable and efficient data integration solutions; implement best practices for reusable Ab Initio components

Preferred technical and professional experience:
- Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization; conduct performance tuning and troubleshooting as needed
- Collaboration: work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes
- Participate in design reviews and provide technical expertise to enhance overall solution quality, and contribute to documentation

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

The developer leads cloud application development/deployment. The developer's responsibility is to lead the execution of a project by working with assigned resources on development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security, using automation and configuration management tools.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Strong proficiency in Java, Spring Framework, Spring Boot, RESTful APIs; excellent understanding of OOP and design patterns
- Strong knowledge of ORM tools like Hibernate or JPA, and Java-based microservices frameworks; hands-on experience with Spring Boot microservices
- Strong knowledge of microservice logging, monitoring, debugging and testing; in-depth knowledge of relational databases (e.g., MySQL)
- Experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development
- Familiarity with Ant, Maven or other build automation frameworks; good knowledge of basic UNIX commands

Preferred technical and professional experience:
- Experience in concurrent design and multi-threading
- Primary skills: Core Java, Spring Boot, Java 2/EE, microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark
- Good to have: Python

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Kochi

Work from Office

Naukri logo

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques; designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements; working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours; and building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with big data technologies such as Hadoop, Apache Spark, and Hive. Practical experience in Core Java (1.8 preferred), Python, or Scala. Experience with AWS cloud services, including S3, Redshift, EMR, etc. Strong expertise in RDBMS and SQL. Good experience with Linux and shell scripting. Experience building data pipelines using Apache Airflow.

Preferred technical and professional experience: You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Gurugram

Work from Office

Naukri logo

The ability to be a team player; the ability and skill to train others in procedural and technical topics; strong communication and collaboration skills.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Ability to write complex SQL queries; experience with Azure Databricks.

Preferred technical and professional experience: Excellent communication and stakeholder management skills.

Posted 1 day ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Mumbai

Work from Office

Naukri logo

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: leading the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements; striving for continuous improvement by testing the built solution and working under an agile framework; and discovering and implementing the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products and platforms, including customer-facing systems.

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques; designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements; working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours; and building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree. Preferred education: High School Diploma/GED.

Required technical and professional expertise: Good hands-on experience with dbt is required; ETL (DataStage) and Snowflake are preferred. Ability to use programming languages such as Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.

Preferred technical and professional experience: You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Naukri logo

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques; designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements; working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours; and building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements.

Preferred technical and professional experience: Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or shell scripting for custom transformations and automation tasks.

Posted 1 day ago

Apply

89.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Linkedin logo

Company Description

GFK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers’ buying behavior, and the dynamics impacting their markets, brands and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives “Growth from Knowledge”.

Job Description

It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ’s Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate and measure their media activities.

Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making.

Role Overview

We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions.

As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms.

Key Responsibilities

Technical Leadership & Architecture: Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science. Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability. Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture. Work closely with Product Owners to translate business needs into technical solutions.

Hands-on Development & Technical Excellence: Lead by example through high-quality coding, code reviews, and proof-of-concept development. Solve complex engineering problems and contribute to critical design decisions. Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR. Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment.

Line Management & Team Development: Provide line management to engineers, ensuring their professional growth and development. Conduct performance reviews, set development goals, and mentor team members to enhance their skills. Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries. Support hiring, onboarding, and career development initiatives within the engineering team.

Collaboration & Cross-Team Coordination: Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components. Provide mentorship and guidance to developers, helping them level up their skills and technical understanding. Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code. Engage with stakeholders across the business, translating technical concepts into business-relevant insights.

Governance, Security & Data Best Practices: Champion data governance, lineage, and security across the platform. Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines. Ensure compliance with industry standards, internal policies, and regulatory requirements.

Qualifications

Requirements & Experience: Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages. Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR. Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines. Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions. Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation). Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing. Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices. Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders. Proven line management experience, including mentoring, career development, and performance management of engineering teams.

Additional Information

Our Benefits: Flexible working environment; volunteer time off; LinkedIn Learning; Employee Assistance Program (EAP).

About NIQ

NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion

NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 day ago

Apply

Exploring Spark Jobs in India

The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
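
To make the day-to-day work concrete, here is a minimal PySpark sketch of the kind of aggregation job many of these roles involve. The application name and sample data are illustrative assumptions, not taken from any posting above; it assumes a local PySpark installation (pip install pyspark).

```python
from pyspark.sql import SparkSession

# Entry point for any Spark application.
spark = SparkSession.builder.appName("WordCount").getOrCreate()

# Illustrative in-memory data standing in for a real source (files, Kafka, etc.).
words = spark.createDataFrame(
    [("spark",), ("hadoop",), ("spark",), ("hive",)], ["word"]
)

# A typical aggregation: group by key and count occurrences.
counts = words.groupBy("word").count()
counts.show()

spark.stop()
```

The same pattern scales from a laptop to a cluster; only the SparkSession configuration changes, not the job logic.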

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities have a high concentration of tech companies and startups actively hiring for Spark roles.

Average Salary Range

The average salary range for Spark professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Salaries may vary based on the company, location, and specific job requirements.

Career Path

In the field of Spark, a typical career progression may look like:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.

Related Skills

Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:

  • Hadoop
  • Java or Scala programming
  • Data processing and analytics
  • SQL databases

Having a combination of these skills can make a candidate more competitive in the job market.

Interview Questions

  • What is Apache Spark and how is it different from Hadoop? (basic)
  • Explain the difference between RDD, DataFrame, and Dataset in Spark. (medium)
  • How does Spark handle fault tolerance? (medium)
  • What is lazy evaluation in Spark? (basic; illustrated in the first sketch after this list)
  • Explain the concept of transformations and actions in Spark. (basic)
  • What are the different deployment modes in Spark? (medium)
  • How can you optimize the performance of a Spark job? (advanced)
  • What is the role of a Spark executor? (medium)
  • How does Spark handle memory management? (medium)
  • Explain the Spark shuffle operation. (medium)
  • What are the different types of joins in Spark? (medium)
  • How can you debug a Spark application? (medium)
  • Explain the concept of checkpointing in Spark. (medium)
  • What is lineage in Spark? (basic)
  • How can you monitor and manage a Spark application? (medium)
  • What is the significance of the Spark Driver in a Spark application? (medium)
  • How does Spark SQL differ from traditional SQL? (medium)
  • Explain the concept of broadcast variables in Spark. (medium; illustrated in the second sketch after this list)
  • What is the purpose of the SparkContext in Spark? (basic)
  • How does Spark handle data partitioning? (medium)
  • Explain the concept of window functions in Spark SQL. (advanced)
  • How can you handle skewed data in Spark? (advanced; illustrated in the third sketch after this list)
  • What is the use of accumulators in Spark? (advanced)
  • How can you schedule Spark jobs using Apache Oozie? (advanced)
  • Explain the process of Spark job submission and execution. (basic)
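
For a few of the questions above, a short code sketch can make the concept concrete. First, lazy evaluation and the split between transformations and actions: transformations only record lineage, and nothing runs until an action is called. This is a minimal sketch with made-up data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("LazyEvalDemo").getOrCreate()
rdd = spark.sparkContext.parallelize(range(10))

# Transformations (filter, map) are lazy: they only extend the lineage
# graph; no computation happens yet.
evens = rdd.filter(lambda x: x % 2 == 0)
squared = evens.map(lambda x: x * x)

# An action (collect) triggers execution of the whole lineage at once.
print(squared.collect())  # [0, 4, 16, 36, 64]

spark.stop()
```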
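
Second, broadcast variables: a small read-only lookup table is shipped once to each executor rather than being serialized with every task. A minimal sketch, using a hypothetical country-code lookup:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("BroadcastDemo").getOrCreate()
sc = spark.sparkContext

# Small read-only lookup, sent once to each executor.
country_names = sc.broadcast({"IN": "India", "US": "United States"})

records = sc.parallelize([("IN", 1), ("US", 2), ("IN", 3)])
# Each task reads the broadcast value locally instead of re-shipping the dict.
named = records.map(lambda kv: (country_names.value[kv[0]], kv[1]))
print(named.collect())  # [('India', 1), ('United States', 2), ('India', 3)]

spark.stop()
```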
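
Third, skewed data: one common remedy is salting, where a hot key is split across several artificial sub-keys before a join so no single partition does all the work. The sketch below assumes a deliberately skewed toy dataset and a salt factor of 8; real jobs would tune both.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("SaltingDemo").getOrCreate()

# Toy skewed fact table: almost every row shares the same hot key.
facts = spark.createDataFrame(
    [("hot", i) for i in range(1000)] + [("cold", 1)], ["key", "value"]
)
dims = spark.createDataFrame([("hot", "H"), ("cold", "C")], ["key", "label"])

SALTS = 8
# Add a random salt to the large, skewed side...
salted_facts = facts.withColumn("salt", (F.rand() * SALTS).cast("long"))
# ...and replicate the small side once per salt value so every
# (key, salt) combination still finds its match.
salted_dims = dims.crossJoin(
    spark.range(SALTS).withColumnRenamed("id", "salt")
)

joined = salted_facts.join(salted_dims, ["key", "salt"]).drop("salt")
print(joined.count())  # 1001, identical to the result of the plain join

spark.stop()
```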

Closing Remark

As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!
