3.0 - 6.0 years
5 - 10 Lacs
Pune
Work from Office
Experience working on complex and medium-to-large projects. The candidate will have expertise in data warehousing concepts, ETL, and data integration. Minimum of 3-6 years of experience in Ab Initio. Ability to work independently. Good experience in Hadoop. Able to work on UNIX.
Posted 3 weeks ago
12.0 - 14.0 years
16 - 18 Lacs
Mumbai
Work from Office
Associate Director, Data Engineering (J2EE/Angular/React Full Stack Individual Contributor) About the Role: Grade Level (for internal use): 12 The Team: You will be an expert contributor on the Rating Organization's Data Services Product Engineering Team. This team, which has broad, expert knowledge of the Rating organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform. Responsibilities: Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform. Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving. Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals. Manage and improve existing software solutions, ensuring high performance and scalability. Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes. Produce comprehensive technical design documents and conduct technical walkthroughs.
Experience & Qualifications: Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent (or higher) is required. Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development. 12+ years of total experience, with 8+ years designing enterprise products, modern data stacks, and analytics platforms. 6+ years of hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design patterns and full-stack knowledge including modern distributed front-end and back-end technology stacks. 5+ years of full-stack development experience in modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB. Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus. Experience designing transactional/data warehouse/data lake systems and data integrations with the big data ecosystem, leveraging AWS cloud technologies. Thorough understanding of distributed computing. Passionate, smart, and articulate developer. Quality-first mindset with a strong background and experience developing products for a global audience at scale. Excellent analytical thinking, interpersonal, and oral and written communication skills, with a strong ability to influence both IT and business partners. Superior knowledge of system architecture, object-oriented design, and design patterns. Good work ethic, self-starter, and results-oriented. Additional Preferred Qualifications: Experience working with AWS. Experience with the SAFe Agile Framework. Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
Hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design principles. Ability to prioritize and manage work to critical project timelines in a fast-paced environment. Excellent analytical and communication skills are essential, with strong verbal and writing proficiencies. Ability to train and mentor.
Posted 3 weeks ago
10.0 - 14.0 years
12 - 16 Lacs
Mumbai, Maharashtra
Work from Office
About the Role: Grade Level (for internal use): 11 The Team: You will be an expert contributor on the Rating Organization's Data Services Product Engineering Team. This team, which has broad, expert knowledge of the Rating organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform. Responsibilities: Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform. Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving. Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals. Manage and improve existing software solutions, ensuring high performance and scalability. Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes. Produce comprehensive technical design documents and conduct technical walkthroughs.
Experience & Qualifications: Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent (or higher) is required. Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development. 10+ years of experience, with 4+ years designing/developing enterprise products, modern tech stacks, and data platforms. 4+ years of hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design patterns and full-stack knowledge including modern distributed front-end and back-end technology stacks. 5+ years of full-stack development experience in modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB. Experience designing transactional/data warehouse/data lake systems and data integrations with the big data ecosystem, leveraging AWS cloud technologies. Thorough understanding of distributed computing. Passionate, smart, and articulate developer. Quality-first mindset with a strong background and experience developing products for a global audience at scale. Excellent analytical thinking, interpersonal, and oral and written communication skills, with a strong ability to influence both IT and business partners. Superior knowledge of system architecture, object-oriented design, and design patterns. Good work ethic, self-starter, and results-oriented. Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus. Additional Preferred Qualifications: Experience working with AWS. Experience with the SAFe Agile Framework. Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
Hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design principles. Ability to prioritize and manage work to critical project timelines in a fast-paced environment. Excellent analytical and communication skills are essential, with strong verbal and writing proficiencies. Ability to train and mentor.
Posted 3 weeks ago
1.0 - 4.0 years
3 - 6 Lacs
Pune
Hybrid
Must-have skills: Python, R, SQL, Power BI, Spotfire, Hadoop, Hive. Good-to-have skills: Spark, statistics, big data. Job Description: Being part of a digital delivery data group supporting bp Solutions, you will apply your domain knowledge and familiarity with domain data processes to support the organisation. Part of bp's Production & Operations business, bp Solutions has hubs in London, Pune, and Houston. The data team provides daily operational data management, data engineering, and analytics support to this organisation across a broad range of activity, from facilities and subsea engineering to logistics. Let me tell you about the role: a data analyst collects, processes, and performs analyses on a variety of datasets. Their key responsibilities include interpreting complex data sets to identify trends and patterns, using analytical tools and methods to generate actionable insights, and creating visualizations and reports to communicate those insights and recommendations to support decision-making. Data analysts collaborate closely with business domain stakeholders to understand their data analysis needs, ensure data accuracy, recommend data-driven solutions, and solve value-impacting business problems. You might be a good fit for this role if you: have strong domain knowledge in at least one of facilities or subsea engineering, maintenance and reliability, operations, or logistics; have strong analytical skills and demonstrable capability in applying analytical techniques and Python scripting to solve practical problems; are curious, and keen to apply new technologies, trends, and methods to improve existing standards and the capabilities of the subsurface community; are well organized and self-motivated, balancing proactive and reactive approaches across multiple priorities to complete tasks on time; apply judgment and common sense, using insight and good judgment to inform actions and respond to situations as they arise.
What you will deliver: Be a bridge between asset teams and Technology, combining an in-depth understanding of one or more relevant domains with data & analytics skills. Provide actionable, data-driven insights by combining deep statistical skills, data manipulation capabilities, and business insight. Proactively identify impactful opportunities and autonomously complete data analysis, applying existing data & analytics strategies relevant to your immediate scope. Clean, pre-process, and analyse both structured and unstructured data. Develop data visualisations to analyse and interrogate broad datasets (e.g. with tools such as Microsoft Power BI, Spotfire, or similar). Present results to peers and senior management, influencing decision-making. What you will need to be successful (experience and qualifications): Essential: MSc or equivalent experience in a quantitative field, preferably statistics. Strong domain knowledge in at least one of facilities or subsea engineering, maintenance and reliability, operations, or logistics. Hands-on experience carrying out data analytics, data mining, and product analytics in complex, fast-paced environments. Applied knowledge of data analytics and data pipelining tools and approaches across all data lifecycle stages. Deep understanding of a few, and a high-level understanding of several, commonly used statistical approaches. Advanced SQL knowledge. Advanced scripting experience in R or Python. Ability to write and maintain moderately complex data pipelines. Customer-centric and pragmatic mindset. Focus on value delivery and swift execution, while maintaining attention to detail. Excellent communication and interpersonal skills, with the ability to effectively communicate ideas, expectations, and feedback to team members, stakeholders, and customers. Foster collaboration and teamwork. Desired: Advanced analytics degree. Experience applying analytics to support engineering turnarounds. Experience with big data technologies (e.g. Hadoop, Hive, and Spark) is a plus.
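The clean-preprocess-summarise loop this analyst role describes can be sketched in a few lines of standard-library Python; the well names and uptime figures below are invented for illustration and are not taken from the posting.

```python
import csv
import io
import statistics

# Hypothetical facilities dataset: one uptime reading per well, one missing.
raw = """well,uptime_pct
A-01,97.5
A-02,
A-03,88.0
A-01,99.1
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Cleaning: drop rows where the uptime reading is missing.
clean = [r for r in rows if r["uptime_pct"]]
uptimes = [float(r["uptime_pct"]) for r in clean]

# Analysis: a simple summary statistic a stakeholder can act on.
print(f"readings kept: {len(clean)}, mean uptime: {statistics.mean(uptimes):.1f}%")
# → readings kept: 3, mean uptime: 94.9%
```

In practice the same shape of work runs over far larger datasets via SQL or pandas, and the summary feeds a Power BI or Spotfire dashboard rather than a print statement.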
Posted 3 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Role Overview: We are looking for a skilled .NET Backend Developer with Azure Data Engineering expertise to join our dynamic and growing team. This role demands strong hands-on experience in .NET technologies along with cloud-based data engineering platforms like Azure Databricks or Snowflake. Primary Technical Skills (Must-Have): .NET Core / ASP.NET Core / C# with strong backend development; Web API & microservices architecture; SQL Server, NoSQL, Entity Framework (EF 6+); Azure Cloud Platform, Azure Data Engineering; Azure Databricks, Microsoft Fabric, or Snowflake; database performance tuning & optimization; strong understanding of OOP & design patterns; Agile methodology experience. Nice to Have (Secondary Skills): Angular / JavaScript frameworks; MongoDB; NPM; Azure DevOps build/release configuration; strong troubleshooting and communication skills; experience working with US clients is a plus. Required Qualifications: B.Tech / B.E / MCA / M.Tech or equivalent. Minimum 6+ years of relevant hands-on experience. Must be willing to work onsite in Hyderabad. Excellent communication (verbal & written).
Posted 3 weeks ago
6.0 - 8.0 years
10 - 14 Lacs
Ahmedabad
Remote
Hiring Senior Azure Data Architect & Performance Engineer (Remote, 6 PM–3 AM IST). Expert in SQL Server, Azure, T-SQL, PowerShell, performance tuning, Oracle to SQL migration, Snowflake. 6–8 yrs exp. Strong DB internals & Azure skills required.
Posted 3 weeks ago
4.0 - 8.0 years
6 - 12 Lacs
Bengaluru
Work from Office
4+ years in data science, ML frameworks, MLOps, Python, data engineering, cloud platforms, edge computing, data manipulation libraries, model development, Object, experiment design, A/B testing. Reach me at mailcv108@gmail.com or WhatsApp me at +91 9611702105
Posted 3 weeks ago
2.0 - 7.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Roles and Responsibilities: Design, develop, and deploy advanced AI models with a focus on generative AI, including transformer architectures (e.g., GPT, BERT, T5) and other deep learning models used for text, image, or multimodal generation. Work with extensive and complex datasets, performing tasks such as cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training. Collaborate with cross-functional teams (e.g., product, engineering, data science) to identify project objectives and create solutions using generative AI tailored to business needs. Implement, fine-tune, and scale generative AI models in production environments, ensuring robust model performance and efficient resource utilization. Develop pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production. Stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability. Document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices. Qualifications Required: Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. A relevant Ph.D. or research experience in generative AI is a plus. Experience: 2 - 11 years of experience in machine learning, with 2+ years designing and implementing generative AI models or working specifically with transformer-based models.
Skills and Experience Required: Generative AI: Transformer Models, GANs, VAEs, Text Generation, Image Generation. Machine Learning: Algorithms, Deep Learning, Neural Networks. Programming: Python, SQL; familiarity with libraries such as Hugging Face Transformers, PyTorch, TensorFlow. MLOps: Docker, Kubernetes, MLflow, Cloud Platforms (AWS, GCP, Azure). Data Engineering: Data Preprocessing, Feature Engineering, Data Cleaning. Why you'll love working with us: BRING YOUR PASSION AND FUN. Corporate culture woven from highly diverse perspectives and insights. BALANCE WORK AND PERSONAL TIME LIKE A BOSS. Resources and flexibility to more easily integrate your work and your life. BECOME A CERTIFIED SMARTY PANTS. Ongoing training and development opportunities for even the most insatiable learner. START-UP SPIRIT (a good ten-plus years in, yet we maintain it). FLEXIBLE WORKING HOURS
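As a minimal illustration of the building block behind the transformer architectures this posting names (GPT, BERT, T5), here is scaled dot-product attention in plain Python. The vectors and dimensions are made up, and a real model would compute this over batched tensors with PyTorch or TensorFlow rather than lists.

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating, for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention: softmax(q . k / sqrt(d)), then a
    # weighted blend of the value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# With two identical keys the weights are equal, so the output is the
# mean of the two value vectors.
out = attention([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], [[2.0, 0.0], [4.0, 0.0]])
print(out)  # [3.0, 0.0]
```

Stacking many of these attention heads with feed-forward layers is, at a high level, what the transformer models above consist of.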
Posted 3 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Overview The Sr. Software Engineer will be part of a team of some of the best and brightest in the industry who are focused on full-cycle development of scalable web and responsive applications that touch our growing customer base every day. As part of the Labs team, you will work collaboratively with agile team members to design new system functionality and to research and remedy complex issues as they arise, embodying a passion for continuous improvement and test-driven development. Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes. Collaborate with software engineers, data scientists, and product managers to understand data requirements and provide tailored solutions. Optimize and enhance the performance of our data infrastructure to support analytics and reporting. Implement and maintain data governance and security best practices. Troubleshoot and resolve data-related issues and ensure data quality and integrity. Mentor and guide junior data engineers, fostering a culture of continuous learning and improvement. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3+ years of experience in data engineering or a similar role. Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL). Strong programming skills in Python. Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and containerization (e.g., Docker). Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.
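A toy version of the ETL flow and data-quality gate described above, using Python's built-in sqlite3 in place of the PostgreSQL/MySQL databases the posting mentions; the table names and rows are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract: raw orders land in a staging table, warts and all.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, None, "paid"), (3, 75.5, "refunded")],
)

# Transform + load: only valid, paid orders reach the reporting table.
# The WHERE clause is the data-quality and -integrity check in miniature.
conn.execute(
    "CREATE TABLE fact_orders AS "
    "SELECT id, amount FROM raw_orders "
    "WHERE amount IS NOT NULL AND status = 'paid'"
)

total, = conn.execute("SELECT SUM(amount) FROM fact_orders").fetchone()
print(total)  # 120.0
```

The same extract/validate/load shape scales up to scheduled pipelines against a real warehouse, with the quality rules versioned alongside the transformation SQL.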
Posted 3 weeks ago
8.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
What you’ll be doing: Assist in developing machine learning models based on project requirements. Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality. Perform statistical analysis and fine-tuning using test results. Support training and retraining of ML systems as needed. Help build data pipelines for collecting and processing data efficiently. Follow coding and quality standards while developing AI/ML solutions. Contribute to frameworks that help operationalize AI models. What we seek in you: 8+ years of experience in the IT industry. Strong in programming languages like Python. Hands-on experience with one cloud (GCP preferred). Experience working with Docker. Environment management (e.g. venv, pip, poetry). Experience with orchestrators like Vertex AI Pipelines, Airflow, etc. Understanding of the full ML cycle end-to-end. Data engineering and feature engineering techniques. Experience with ML modelling and evaluation metrics. Experience with TensorFlow, PyTorch, or another framework. Experience with model monitoring. Advanced SQL knowledge. Aware of streaming concepts like windowing, late arrival, triggers, etc. Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases. Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices. Schedule: Cloud Composer, Airflow. Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink. CI/CD: Bitbucket + Jenkins / GitLab. Infrastructure as code: Terraform. Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress.
Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth. Perks of working with us: Clear objectives to ensure alignment with our mission, fostering your meaningful contribution. Abundant opportunities for engagement with customers, product managers, and leadership. You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions. Cultivate and leverage robust connections within diverse communities of interest. Choose your mentor to navigate your current endeavors and steer your future trajectory. Embrace continuous learning and upskilling opportunities through Nexversity. Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies. Embrace a hybrid work model promoting work-life balance. Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones. Embark on accelerated career paths to actualize your professional aspirations. Who we are? We enable high growth enterprises build hyper personalized solutions to transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology and harness the power of data and AI to co-create solutions tailored made to meet unique needs for our customers. Join our passionate team and tailor your growth with us!
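The streaming concepts this posting asks about — windowing, late arrival, watermarks — can be sketched without any streaming framework. The event times, window size, and lateness policy below are invented for illustration; real engines such as Dataflow or Flink implement the same idea over unbounded streams.

```python
from collections import defaultdict

def tumbling_windows(events, size, lateness):
    """Assign (event_time, value) pairs to fixed-size windows.

    The watermark tracks the highest event time seen so far; an event whose
    window closed more than `lateness` ago by that watermark is dropped,
    which is roughly how allowed-lateness behaves in Beam/Flink-style engines.
    """
    windows = defaultdict(list)
    dropped = []
    watermark = 0
    for ts, value in events:
        watermark = max(watermark, ts)
        window_start = (ts // size) * size
        if watermark > window_start + size + lateness:
            dropped.append((ts, value))  # arrived after the window finalized
        else:
            windows[window_start].append(value)
    return dict(windows), dropped

# The event at t=2 arrives after t=12 has advanced the watermark past
# window [0, 10), so it is treated as late and dropped.
events = [(1, "a"), (3, "b"), (12, "c"), (2, "late")]
wins, dropped = tumbling_windows(events, size=10, lateness=0)
print(wins, dropped)  # {0: ['a', 'b'], 10: ['c']} [(2, 'late')]
```

Raising `lateness` trades result freshness for completeness, which is exactly the trigger/lateness tuning decision the posting alludes to.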
Posted 3 weeks ago
7.0 - 12.0 years
18 - 30 Lacs
Chennai
Hybrid
Hi, we have a vacancy for a Sr. Data Engineer. We are seeking an experienced Senior Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing and implementing the data engineering framework. Responsibilities: Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI. Strong skills in data pipelines. Able to work with Power BI and Power BI reporting. Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory. Document the high-level design components of the Databricks data pipeline framework. Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan. Lead the design and implementation of an MVP Databricks framework. Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework. Support integrating a test automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing. Support the development team's capability building by establishing an L&D and knowledge transition approach. Support the implementation of data pipelines against the new framework in line with the agreed migration plan. Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients. Skill Set: Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps. Proficient Python, PySpark, and SQL coding skills. Data profiling and data modelling experience on large data transformation projects, creating data products and data pipelines. Experience creating data management frameworks and data pipelines which are metadata and business-rules driven, using Databricks.
Experience of reviewing datasets for data products in terms of data quality management, and of populating data schemas set by data modellers. Experience with data profiling, data quality management, and data cleansing tools. Immediate joining or short notice is required. Please call Varsha at 7200847046 for more info. Thanks, Varsha (7200847046)
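The "metadata and business-rules driven" pipeline idea mentioned above can be illustrated engine-agnostically in plain Python; a real implementation would target PySpark/Databricks, and the step names, columns, and rows here are invented.

```python
# Each step is declared as metadata; the engine interprets it, so adding a
# new pipeline means writing configuration rather than code.
PIPELINE_CONFIG = [
    {"op": "rename", "mapping": {"cust_nm": "customer_name"}},
    {"op": "filter", "column": "amount", "predicate": lambda v: v is not None},
    {"op": "dedupe", "key": "customer_name"},
]

def run_pipeline(rows, config):
    for step in config:
        if step["op"] == "rename":
            rows = [{step["mapping"].get(k, k): v for k, v in r.items()}
                    for r in rows]
        elif step["op"] == "filter":
            rows = [r for r in rows if step["predicate"](r[step["column"]])]
        elif step["op"] == "dedupe":
            seen, deduped = set(), []
            for r in rows:
                if r[step["key"]] not in seen:
                    seen.add(r[step["key"]])
                    deduped.append(r)
            rows = deduped
    return rows

raw = [
    {"cust_nm": "Asha", "amount": 10},
    {"cust_nm": "Asha", "amount": 12},
    {"cust_nm": "Ravi", "amount": None},
]
result = run_pipeline(raw, PIPELINE_CONFIG)
print(result)  # [{'customer_name': 'Asha', 'amount': 10}]
```

In a Databricks version each `op` would map onto DataFrame transformations (`withColumnRenamed`, `filter`, `dropDuplicates`), with the config held in a metadata table instead of a Python literal.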
Posted 3 weeks ago
6.0 - 11.0 years
13 - 18 Lacs
Ahmedabad
Work from Office
About the Company e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys and Naturium, high-performance, biocompatible, clinically-effective and accessible skincare. In our Fiscal year 24, we had net sales of $1 Billion and our business performance has been nothing short of extraordinary with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and are the fastest growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid 3 day in office, 2 day at home work environment. We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us Job Summary: We’re looking for a strategic and technically strong Senior Data Architect to join our high-growth digital team. The selected person will play a critical role in shaping the company’s global data architecture and vision. The ideal candidate will lead enterprise-level architecture initiatives, collaborate with engineering and business teams, and guide a growing team of engineers and QA professionals. 
This role involves deep engagement across domains including Marketing, Product, Finance, and Supply Chain, with a special focus on marketing technology and commercial analytics relevant to the CPG/FMCG industry. The candidate should bring a hands-on mindset, a proven track record in designing scalable data platforms, and the ability to lead through influence. An understanding of industry-standard frameworks (e.g., TOGAF) and tools like CDPs, MMM platforms, and AI-based insights generation will be a strong plus. Curiosity, communication, and architectural leadership are essential to succeed in this role. Key Responsibilities: Enterprise Data Strategy: Design, define, and maintain a holistic data strategy and roadmap that aligns with corporate objectives and fuels digital transformation. Ensure data architecture and products align with enterprise standards and best practices. Data Governance & Quality: Establish scalable governance frameworks to ensure data accuracy, privacy, security, and compliance (e.g., GDPR, CCPA). Oversee quality, security, and compliance initiatives. Data Architecture & Platforms: Oversee modern data infrastructure (e.g., data lakes, warehouses, streaming) with technologies like Snowflake, Databricks, AWS, and Kafka. Marketing Technology Integration: Ensure data architecture supports marketing technologies and commercial analytics platforms (e.g., CDP, MMM, ProfitSphere) tailored to the CPG/FMCG industry. Architectural Leadership: Act as a hands-on architect with the ability to lead through influence. Guide design decisions aligned with industry best practices and e.l.f.'s evolving architecture roadmap. Cross-Functional Collaboration: Partner with Marketing, Supply Chain, Finance, R&D, and IT to embed data-driven practices and deliver business impact. Lead integration of data from multiple sources into a unified data warehouse. Cloud Optimization: Optimize data flows and storage for performance and scalability.
Lead data migration priorities, manage metadata repositories and data dictionaries, and optimize databases and pipelines for efficiency. Manage and track quality, cataloging, and observability. AI/ML Enablement: Drive initiatives to operationalize predictive analytics, personalization, demand forecasting, and more using AI/ML models. Evaluate emerging data technologies and tools to improve data architecture. Team Leadership: Lead, mentor, and enable a high-performing team of data engineers, analysts, and partners through influence and thought leadership. Vendor & Tooling Strategy: Manage relationships with external partners and drive evaluations of data and analytics tools. Executive Reporting: Provide regular updates and strategic recommendations to executive leadership and key stakeholders. Data Enablement: Design data models, database structures, and data integration solutions to support large volumes of data. Qualifications and Requirements: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 18+ years of experience in Information Technology. 8+ years of experience in data architecture, data engineering, or a related field, with a focus on large-scale, distributed systems. Strong understanding of data use cases in the CPG/FMCG sector. Experience with tools such as MMM (Marketing Mix Modeling), CDPs, ProfitSphere, or inventory analytics preferred. Awareness of architecture frameworks like TOGAF. Certifications are not mandatory, but candidates must demonstrate clear thinking and experience in applying architecture principles. Must possess excellent communication skills and a proven ability to work cross-functionally across global teams. Should be capable of leading with influence, not just execution. Knowledge of data warehousing, ETL/ELT processes, and data modeling. Deep understanding of data modeling principles, including schema design and dimensional data modeling.
Strong SQL development experience including SQL Queries and stored procedures Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering. Familiarity with data security and governance best practices Experience with cloud computing platforms such as Snowflake, AWS, Azure, or GCP Excellent problem-solving abilities with a focus on data analysis and interpretation. Strong communication and collaboration skills. Ability to translate complex technical concepts into actionable business strategies. Proficiency in one or more programming languages such as Python, Java, or Scala This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered, as detailed description of all the work required inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisors’ discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
Posted 3 weeks ago
7.0 - 12.0 years
25 - 40 Lacs
Pune
Work from Office
Experience as a Data Analyst with GCP & Hadoop is mandatory. Work From Office
Posted 3 weeks ago
5.0 - 7.0 years
7 - 9 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
Requirements / What You'll Do: Analyse Business Requirements: Understand and evaluate client needs. Data Model Analysis & GAP Analysis: Review the current data model and identify gaps relative to business requirements. Design BI Schema: Develop Data Warehouse/Business Intelligence schemas. Data Transformation: Utilize Power BI, Tableau, SQL, or ETL tools to transform data. Create Reports & Dashboards: Develop interactive reports and dashboards with calculated formulas. SQL Expertise: Write complex SQL queries and stored procedures. Design BI Solutions: Architect effective business intelligence solutions tailored to business needs. Team Management: Lead and guide a team of BI developers. Data Integration: Integrate data from multiple sources into BI tools for comprehensive analysis. Performance Optimization: Ensure reports and dashboards run efficiently. Stakeholder Collaboration: Work with stakeholders to align BI projects with business goals. Mandatory Knowledge: In-depth knowledge of Data Warehousing; Data Engineering is a plus. What You'll Bring: Educational Qualification: B.Tech in Computer Science or equivalent. Experience: Minimum 7+ years of relevant experience. Share your resume with details on current CTC, expected CTC, and preferred location. Location - Hyderabad, Ahmedabad, Gurgaon, Indore (India)
Posted 3 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Gurugram
Work from Office
Data Engineer to be a key player in building a robust data infrastructure and flows that support our advanced forecasting models. Your initial focus will be to create a robust data factory to ensure smooth collection and refresh of actual data, a critical component that feeds our forecast. Additionally, you will assist in developing mathematical models and supporting the work of ML engineers and data scientists. Your work will significantly impact our ability to deliver timely and insightful forecasts to our clients. What's in it for you: Opportunity to build foundational data infrastructure that directly impacts advanced forecasting models and client delivery. Gain exposure to and support the development of sophisticated mathematical models, Machine Learning, and Data Science applications. Contribute significantly to delivering timely and insightful forecasts, influencing client decisions in the automotive sector. Work in a collaborative environment that fosters continuous learning, mentorship, and professional growth in data engineering and related analytical fields. Responsibilities: Data Pipeline Development: Design, build, and maintain scalable and reliable data pipelines for efficient data ingestion, processing, and storage, primarily focusing on creating a data factory for our core forecasting data. Data Quality and Integrity: Implement robust data quality checks and validation processes to ensure the accuracy and consistency of data used in our forecasting models. Mathematical Model Support: Collaborate with other data engineers to develop and refine mathematical logic and models that underpin our forecasting methodologies. ML and Data Science Support: Provide data support to our Machine Learning Engineers and Data Scientists. Collaboration and Communication: Work closely with analysts, developers, and other stakeholders to understand data requirements and deliver effective solutions.
Innovation and Improvement: Continuously explore and evaluate new technologies and methodologies to enhance our data infrastructure and forecasting capabilities. What We're Looking For: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Minimum of 3 years of experience in data engineering, with a proven track record of building and maintaining data pipelines. Strong proficiency in SQL and experience with relational and non-relational databases. Strong Python programming skills, with experience in data manipulation and processing libraries (e.g., Pandas, NumPy). Experience with mathematical modelling and supporting ML and data science teams. Experience with cloud platforms (e.g., AWS, Azure, GCP) and cloud-based data services. Strong communication and collaboration skills, with the ability to work effectively in a team environment. Experience in the automotive sector is a plus.
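The data-quality responsibility above (validation checks before data feeds the forecasting models) can be sketched in plain Python. The record schema and the three rules below (completeness, validity, uniqueness) are illustrative assumptions, not the team's actual checks:

```python
# Minimal data-quality validation sketch: the check returns a list of
# human-readable issues; an empty list means the batch passes.
def validate_records(records, required_fields=("vehicle_id", "month", "units")):
    issues = []
    seen = set()
    for i, rec in enumerate(records):
        # Completeness: every required field must be present and non-null.
        for field in required_fields:
            if rec.get(field) is None:
                issues.append(f"row {i}: missing {field}")
        # Validity: units must be non-negative when present.
        units = rec.get("units")
        if isinstance(units, (int, float)) and units < 0:
            issues.append(f"row {i}: negative units ({units})")
        # Uniqueness: one row per (vehicle_id, month).
        key = (rec.get("vehicle_id"), rec.get("month"))
        if key in seen:
            issues.append(f"row {i}: duplicate key {key}")
        seen.add(key)
    return issues

batch = [
    {"vehicle_id": "V1", "month": "2024-01", "units": 120},
    {"vehicle_id": "V1", "month": "2024-01", "units": 120},  # duplicate
    {"vehicle_id": "V2", "month": "2024-01", "units": -5},   # invalid
    {"vehicle_id": "V3", "month": "2024-01"},                # incomplete
]
problems = validate_records(batch)
print(problems)
```

In a production pipeline these checks would typically run as a gate before loading, with failing rows routed to a quarantine table rather than dropped silently.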
Posted 3 weeks ago
12.0 - 16.0 years
45 - 50 Lacs
Mumbai, Maharashtra
Work from Office
Associate Director, Data Engineering (J2EE/Angular/React Full Stack Individual Contributor) Responsibilities: Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform. Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving. Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals. Manage and improve existing software solutions, ensuring high performance and scalability. Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes. Produce comprehensive technical design documents and conduct technical walkthroughs. Experience & Qualifications: Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent is required. Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development. 12+ years of total experience, with 8+ years designing enterprise products, modern data stacks, and analytics platforms. 6+ years of hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design patterns and full-stack knowledge of modern distributed front-end and back-end technology stacks. 5+ years of full-stack development experience in modern web technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB. Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus. Experience designing transactional/data warehouse/data lake solutions and data integrations with the big data ecosystem, leveraging AWS cloud technologies. Thorough understanding of distributed computing. Passionate, smart, and articulate developer. Quality-first mindset with a strong background in developing products for a global audience at scale. Excellent analytical thinking, interpersonal, and oral and written communication skills, with a strong ability to influence both IT and business partners. Superior knowledge of system architecture, object-oriented design, and design patterns. Good work ethic, self-starter, and results-oriented. Additional Preferred Qualifications: Experience working with AWS. Experience with the SAFe Agile Framework. Bachelor's/PG degree in Computer Science, Information Systems, or equivalent. Hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design principles. Ability to prioritize and manage work to critical project timelines in a fast-paced environment. Excellent analytical and communication skills, with strong verbal and writing proficiencies. Ability to train and mentor.
Posted 3 weeks ago
10.0 - 15.0 years
25 - 40 Lacs
Mumbai
Work from Office
Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the Role: Title: Lead Data Engineer. Location: Mumbai. Responsibilities: End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution. Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details: Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery. CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes: Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
Posted 3 weeks ago
3.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role & responsibilities: Develop, test, and maintain high-quality Python applications. Write efficient, reusable, and reliable code following best practices. Optimize applications for performance, scalability, and security. Collaborate with cross-functional teams to define, design, and deliver new features. Design and manage databases using SQL to support application functionality. Think from a customer's perspective to provide the best user experience. Participate in code reviews to maintain code quality and share knowledge. Troubleshoot and resolve software defects and issues. Continuously learn and apply new technologies and best practices. Propose and influence architectural decisions to ensure the scalability and performance of the application. Work in agile/iterative software development teams with a DevOps setup. Requirements: Bachelor's degree (B.E./B.Tech, B.Sc, BCA, or equivalent). Experience in the data engineering domain. At least 6 months of professional experience in Python development. Experience developing and implementing robust back-end functionalities, including data processing, APIs, and integrations with external systems. Strong problem-solving skills. Self-motivated and able to work independently as well as part of a team. Familiarity with Git and CI/CD pipelines using Bitbucket/GitHub. Solid understanding of API design, REST APIs, and GraphQL. Knowledge of unit testing. Good to have: Hands-on experience with AWS services, including Lambda, Glue, SQS, AppSync, API Gateway, and Aurora RDS, and experience with AWS serverless deployment. Hands-on experience with AWS and Infrastructure as Code (IaC).
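The "data processing, APIs, and integrations with external systems" requirement above can be given a small flavour in plain Python. The payload shape, field names, and function below are invented for illustration; they show the kind of defensive normalization a back end applies to data arriving from a hypothetical external API:

```python
def normalize_payload(payload):
    """Flatten a nested API payload into storage-ready records.

    Expects {"items": [{"id": ..., "attrs": {...}}, ...]}; malformed
    items are skipped rather than raising, so one bad record does not
    fail the whole batch.
    """
    records = []
    for item in payload.get("items", []):
        if not isinstance(item, dict) or "id" not in item:
            continue  # skip malformed entries
        attrs = item.get("attrs") or {}
        records.append({
            "id": item["id"],
            "name": attrs.get("name", "").strip().lower(),
            "active": bool(attrs.get("active", False)),
        })
    return records

payload = {"items": [
    {"id": 1, "attrs": {"name": "  Alice ", "active": True}},
    {"id": 2, "attrs": {"name": "Bob"}},
    {"bad": "no id"},
]}
print(normalize_payload(payload))
```

A pure function like this is also easy to cover with the unit testing the posting asks for, since it takes plain data in and returns plain data out.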
Posted 3 weeks ago
3.0 - 8.0 years
15 - 22 Lacs
Gurugram
Work from Office
Data Engineer. Experience: 4 years total, with a minimum of 3+ years of relevant experience in data engineering. Location: Gurgaon. Role Summary: The Data Engineer will develop and maintain AWS-based data pipelines, ensuring optimal ingestion, transformation, and storage of clinical trial data. The role requires expertise in ETL, AWS Glue, Lambda functions, and Redshift optimization. Must have: AWS (Glue, Lambda, Redshift, Step Functions); Python, SQL, API-based ingestion; PySpark; Redshift, SQL/PostgreSQL, Snowflake (optional); Redshift query optimization and indexing; IAM, encryption, row-level security.
Posted 3 weeks ago
10.0 - 14.0 years
15 - 20 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Technical Architect / Solution Architect / Data Architect (Data Analytics) Notice Period: Immediate to 15 Days. Job Summary: We are looking for a highly technical and experienced Data Architect / Solution Architect / Technical Architect with expertise in Data Analytics. The candidate should have strong hands-on experience in solutioning, architecture, and cloud technologies to drive data-driven decisions. Key Responsibilities: Design, develop, and implement end-to-end data architecture solutions. Provide technical leadership in Azure, Databricks, Snowflake, and Microsoft Fabric. Architect scalable, secure, and high-performing data solutions. Work on data strategy, governance, and optimization. Implement and optimize Power BI dashboards and SQL-based analytics. Collaborate with cross-functional teams to deliver robust data solutions. Primary Skills Required: Data Architecture & Solutioning; Azure Cloud (Data Services, Storage, Synapse, etc.); Databricks & Snowflake (Data Engineering & Warehousing); Power BI (Visualization & Reporting); Microsoft Fabric (Data & AI Integration); SQL (Advanced Querying & Optimization). Contact: 9032956160. Looking for immediate to 15-day joiners.
Posted 3 weeks ago
2.0 - 6.0 years
4 - 8 Lacs
Chennai, Delhi / NCR, Bengaluru
Work from Office
Job Summary: We're looking for a SAS Data Integration (DI) Developer to join our team. In this role, you'll be responsible for designing and optimizing ETL processes using SAS DI Studio, as well as automating job schedules. You'll work with large datasets, ensure data integrity, and collaborate with cross-functional teams to meet business requirements. Additionally, you'll troubleshoot and monitor jobs while ensuring adherence to data governance standards. If you're excited to work in a dynamic and growing environment, this is a great opportunity to apply your skills to cutting-edge technologies. Key Responsibilities: Develop and maintain ETL processes using SAS DI Studio. Design and optimize data workflows, including transformations and macros for high-performance data integration. Schedule and automate jobs using SAS Management Console or other scheduling tools (e.g., Control-M, cron). Monitor, troubleshoot, and resolve issues related to scheduled jobs and ETL processes. Work with large datasets, ensuring data integrity and optimizing performance. Collaborate with teams to meet business requirements and improve data workflows. Create documentation for data integration processes and job schedules. Ensure compliance with data governance and security best practices. Qualifications: Experience with SAS DI Studio, SAS programming, and ETL processes. Expertise in job scheduling and automation using SAS Management Console, Control-M, or cron. Proficient in SQL, data transformation, and data quality assurance. Strong problem-solving and troubleshooting skills. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 3 weeks ago
8.0 - 12.0 years
15 - 20 Lacs
Pune
Work from Office
We are looking for a highly experienced Lead Data Engineer / Data Architect to lead the design, development, and implementation of scalable data pipelines, data lakehouse, and data warehousing solutions. The ideal candidate will provide technical leadership to a team of data engineers, drive architectural decisions, and ensure best practices in data engineering. This role is critical in enabling data-driven decision-making and modernizing our data infrastructure. Key Responsibilities: Act as a technical leader responsible for guiding the design, development, and implementation of data pipelines, data lakehouse, and data warehousing solutions. Lead a team of data engineers, ensuring adherence to best practices and standards. Drive the successful delivery of high-quality, scalable, and reliable data solutions. Play a key role in shaping data architecture, adopting modern data technologies, and enabling data-driven decision-making across the team. Provide technical vision, guidance, and mentorship to the team. Lead technical design discussions, perform code reviews, and contribute to architectural decisions.
Posted 3 weeks ago
5.0 - 7.0 years
15 - 25 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role: We are seeking a skilled and experienced Data Engineer to join our remote team. The ideal candidate will have 5-7 years of professional experience working with Python, PySpark, SQL, and Spark SQL, and will play a key role in building scalable data pipelines, optimizing data workflows, and supporting data-driven decision-making across the organization. Key Responsibilities: Design, build, and maintain scalable and efficient data pipelines using PySpark and SQL. Develop and optimize Spark jobs for large-scale data processing. Collaborate with data scientists, analysts, and other engineers to ensure data quality and accessibility. Implement data integration from multiple sources into a unified data warehouse or lake. Monitor and troubleshoot data pipelines and ETL jobs for performance and reliability. Ensure best practices in data governance, security, and compliance. Create and maintain technical documentation related to data pipelines and infrastructure. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad, Remote
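The pipeline responsibilities above follow the common extract-transform-load shape. A minimal sketch is below, using SQLite in place of the warehouse and plain Python in place of PySpark (the same structure carries over to Spark's DataFrame API); the source rows, table, and field names are all hypothetical:

```python
import sqlite3

# Extract: raw rows as they might arrive from a source system.
raw = [
    ("2024-01-05", "ORD-1", "100.50"),
    ("2024-01-05", "ORD-2", "not-a-number"),  # bad row to be filtered
    ("2024-01-06", "ORD-3", "75.25"),
]

# Transform: parse amounts, drop rows that fail.
clean = []
for day, order_id, amount in raw:
    try:
        clean.append((day, order_id, float(amount)))
    except ValueError:
        pass  # in a real pipeline, route to a quarantine table instead

# Load: write into the target table, then aggregate for downstream use.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (day TEXT, order_id TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)
daily = conn.execute(
    "SELECT day, SUM(amount) FROM orders GROUP BY day ORDER BY day"
).fetchall()
print(daily)  # [('2024-01-05', 100.5), ('2024-01-06', 75.25)]
```

Keeping the extract, transform, and load stages separate like this is what makes each stage independently testable and monitorable, which is the point of the reliability and documentation responsibilities in the posting.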
Posted 3 weeks ago
6.0 - 9.0 years
9 - 18 Lacs
Pune, Chennai
Work from Office
Job Title: Data Engineer (Spark/Scala/Cloudera) Location: Chennai/Pune Job Type: Full-time Experience Level: 6-9 years Job Summary: We are seeking a skilled and motivated Data Engineer to join our data engineering team. The ideal candidate will have deep experience with Apache Spark, Scala, and the Cloudera Hadoop ecosystem. You will be responsible for building scalable data pipelines, optimizing data processing workflows, and ensuring the reliability and performance of our big data platform. Key Responsibilities: Design, build, and maintain scalable and efficient ETL/ELT pipelines using Spark and Scala. Work with large-scale datasets on the Cloudera Data Platform (CDP). Collaborate with data scientists, analysts, and other stakeholders to ensure data availability and quality. Optimize Spark jobs for performance and resource utilization. Implement and maintain data governance, security, and compliance standards. Monitor and troubleshoot data pipeline failures and ensure high data reliability. Participate in code reviews, testing, and deployment activities. Document architecture, processes, and best practices. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. 6+ years of experience in big data engineering roles. 2+ years of hands-on experience with Scala. Proficient in Apache Spark (Core/DataFrame/SQL/RDD APIs). Strong programming skills in Scala. Hands-on experience with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Oozie). Familiarity with distributed computing and data partitioning concepts. Strong understanding of data structures, algorithms, and software engineering principles. Experience with CI/CD pipelines and version control systems (e.g., Git). Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus. Preferred Qualifications: Experience with Cloudera Manager and Cloudera Navigator. Exposure to Kafka, NiFi, or Airflow.
Familiarity with data lake, data warehouse, and lakehouse architectures.
Posted 3 weeks ago
8.0 - 13.0 years
18 - 22 Lacs
Hyderabad
Remote
Roles: SQL Data Engineer - ETL, DBT & Snowflake Specialist Location: Remote Duration: 14+ Months Timings: 5:30 PM to 1:30 AM IST Note: Immediate Joiners Only Required Experience: Advanced SQL Proficiency: Writing and optimizing complex queries, stored procedures, functions, and views. Experience with query performance tuning and database optimization. ETL/ELT Development: Building and maintaining ETL/ELT pipelines. Familiarity with ETL tools or processes and orchestration frameworks. Data Modeling: Designing and implementing data models. Understanding of dimensional modeling and normalization. Snowflake Expertise: Hands-on experience with Snowflake's architecture and features. Experience with Snowflake databases, schemas, procedures, and functions. DBT (Data Build Tool): Building data models and transformations using DBT. Implementing DBT best practices, including testing, documentation, and CI/CD integration. Programming and Automation: Proficiency in Python is a plus. Experience with version control systems (e.g., Git, Azure DevOps). Experience with Agile methodologies and DevOps practices. Collaboration and Communication: Working effectively with data analysts and business stakeholders. Translating technical concepts into clear, actionable insights. Prior experience in a fast-paced, data-driven environment.
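The "Advanced SQL Proficiency" requirement above typically means analytics constructs such as window functions. A small sketch follows, using SQLite (window functions need SQLite 3.25+, bundled with modern Python) in place of Snowflake, with an invented `revenue` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE revenue (month TEXT, amount REAL);
INSERT INTO revenue VALUES
  ('2024-01', 100), ('2024-02', 150), ('2024-03', 120);
""")

# Window function: a running total by month, a staple of analytics SQL
# and of the models a DBT project would materialize.
rows = conn.execute("""
    SELECT month,
           amount,
           SUM(amount) OVER (ORDER BY month) AS running_total
    FROM revenue
    ORDER BY month
""").fetchall()
for r in rows:
    print(r)
```

In a DBT project this query would live in a model file and be materialized as a view or table in Snowflake, with tests and documentation attached to it, per the best practices the posting lists.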
Posted 3 weeks ago