5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture strategy. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Ability to troubleshoot and optimize data workflows for efficiency.

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- 15 years of full-time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture to support data initiatives.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes (a minimal sketch follows this posting).
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- 15 years of full-time education is required.
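The ETL work this role describes typically takes shape as a PySpark job on Databricks. Below is a minimal, hedged sketch of such a pipeline; the storage path, column names, and target table are hypothetical placeholders, not part of the posting:

```python
# Minimal extract-transform-load sketch on Databricks (PySpark).
# All paths, columns, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files landed in cloud storage.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/raw/orders/"))

# Transform: deduplicate, filter out bad rows, derive a date column.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("order_amount") > 0)
         .withColumn("order_date", F.to_date("order_timestamp")))

# Load: persist as a managed Delta table for downstream consumers.
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")
```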
Posted 1 week ago
0.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
AV-230749 | Indore, Madhya Pradesh, India | Full-time | Permanent | Global Business Services
DHL INFORMATION SERVICES (INDIA) LLP

Your IT Future, Delivered
Senior Software Engineer (Azure BI)
Open to candidates across India.

With a global team of 5,800 IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the world's biggest logistics company. Our offices in Cyberjaya, Prague, and Chennai have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences.

Digitalization. Simply delivered.
At IT Services, we are passionate about Azure Databricks and PySpark. Our PnP BI Solutions team is continuously expanding. No matter your level of Azure BI software engineering proficiency, you can always grow within our diverse environment. #DHL #DHLITServices #GreatPlace #pyspark #azuredatabricks #snowflakedatabase

Grow together
Timely delivery of DHL packages around the globe, in a way that keeps customer data secure, is at the core of what we do. You will provide project deliverables and day-to-day operational support, and help investigate and resolve incidents. Sometimes requirements or issues get tricky, and this is where your expertise in development, or your cooperation on troubleshooting with other IT support teams and specialists, will come into play. For any requirements regarding BI use cases in an Azure environment, you are our superhero. The same applies to production incidents that need to be fixed.

Ready to embark on the journey? Here's what we are looking for:
- Practical experience in programming using SQL, PySpark (Python), Azure Databricks, and Azure Data Factory (see the sketch after this listing)
- Experience in administration and configuration of Databricks clusters
- Experience with the Snowflake database
- Knowledge of Data Vault data modeling (or high motivation to learn the modeling approach)
- Experience with streaming APIs such as Kafka, plus CI/CD, XML/JSON, and ADLS2
- A comprehensive understanding of public cloud platforms, with a preference for Microsoft Azure
- Proven ability to work in a multicultural environment

An array of benefits for you:
- Flexible work guidelines
- Flexible compensation structure
- Global work culture and opportunities across geographies
- Insurance benefits: health insurance for family, parents, and in-laws; term insurance (life cover); accidental insurance
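The streaming-API experience listed above (Kafka, ADLS2) often comes together in a Spark Structured Streaming job on Azure Databricks. The following is a hedged sketch under assumed broker addresses, topic names, schema, and storage paths, none of which come from the posting:

```python
# Hedged sketch: consume a Kafka topic with Spark Structured Streaming
# and land it in ADLS2 as Delta. Broker, topic, schema, and paths are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("shipment_events").getOrCreate()

schema = StructType([
    StructField("shipment_id", StringType()),
    StructField("status", StringType()),
    StructField("weight_kg", DoubleType()),
])

# Parse the Kafka value payload (JSON) into typed columns.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "shipment-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Write continuously to a bronze Delta path with checkpointing.
(events.writeStream
 .format("delta")
 .option("checkpointLocation", "abfss://lake@account.dfs.core.windows.net/_chk/shipments")
 .start("abfss://lake@account.dfs.core.windows.net/bronze/shipments"))
```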
Posted 1 week ago
7.5 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 7.5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools such as Apache Airflow or similar.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
Posted 1 week ago
5.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with data quality frameworks and best practices.
- Knowledge of programming languages such as Python or Scala.

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based in Mumbai.
- 15 years of full-time education is required.
Posted 1 week ago
5.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline development and management.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based in Mumbai.
- 15 years of full-time education is required.
Posted 1 week ago
7.5 years
0 Lacs
Mumbai Metropolitan Region
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 7.5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline development and management.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based in Mumbai.
- 15 years of full-time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Designation: ML/MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
- Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
- Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters (see the sketch after this listing).
- Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
- Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
- Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
- Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
- End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
- Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
- NLP, CV, and GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
- Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
- Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
- Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
- Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
- Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
- Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
- Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
- Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
- Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
- Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
- Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
- Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
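As a minimal illustration of the training, evaluation, and hyperparameter-tuning responsibilities above, here is a hedged scikit-learn sketch on synthetic data; a production version would read from Azure Blob Storage or Azure Data Lake and run inside an Azure ML pipeline:

```python
# Hedged sketch: train, tune, and evaluate a classifier with scikit-learn.
# Synthetic data stands in for a real dataset pulled from cloud storage.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Hyperparameter tuning via cross-validated grid search.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
)
search.fit(X_train, y_train)

# Evaluate the tuned model on held-out data.
print("best params:", search.best_params_)
print("test accuracy:", accuracy_score(y_test, search.best_estimator_.predict(X_test)))
```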
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team
Deloitte's Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management, and next-generation analytics and technologies, including big data, cloud, cognitive, and machine learning.

Your work profile:
As a Senior Consultant/Manager in our Technology & Transformation practice, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.

Roles and Responsibilities:
The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A Data Engineer is expected to possess strong technical skills.

Key Characteristics:
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
- Interest and passion in Big Data technologies, and appreciation of the value an effective data management solution can bring.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills, with willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives like CoE and CoP.

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill

Optional skills:
- Experience in project management and running a scrum team.
- Experience working with BPC and Planning.
- Exposure to working with an external technical ecosystem.
- MkDocs documentation.

Location and way of working:
- Base location: Bengaluru.
- This profile involves occasional travelling to client locations.
- Hybrid is our default way of working. Each domain has customized the hybrid approach to its unique needs.

Your role as a Senior Consultant/Manager:
We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society.

In addition to living our purpose, Senior Consultants/Managers across our organization must strive to be:
- Inspiring - Leading with integrity to build inclusion and motivation.
- Committed to creating purpose - Creating a sense of vision and purpose.
- Agile - Achieving high-quality results through collaboration and team unity.
- Skilled at building diverse capability - Developing diverse capabilities for the future.
- Persuasive/Influencing - Persuading and influencing stakeholders.
- Collaborating - Partnering to build new solutions.
- Delivering value - Showing commercial acumen.
- Committed to expanding business - Leveraging new business opportunities.
- Analytical acumen - Leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization.
- Effective communication - Holding well-structured and well-articulated conversations to achieve win-win outcomes.
- Engagement management/delivery excellence - Effectively managing engagements to ensure timely and proactive execution, as well as course correction, for the success of the engagement.
- Managing change - Responding to a changing environment with resilience.
- Managing quality & risk - Delivering high-quality results and mitigating risks with utmost integrity and precision.
- Strategic thinking & problem solving - Applying a strategic mindset to solve business issues and complex problems.
- Tech savvy - Leveraging ethical technology practices to deliver high impact for clients and for Deloitte.
- Empathetic leadership and inclusivity - Creating a safe and thriving environment where everyone is valued for who they are, using empathy to understand others and adapting our behaviors and attitudes to become more inclusive.

How you'll grow
Connect for impact: Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterized by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams, and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude, and potential each and every one of us brings to the table to make an impact that matters.
Drive your career: At Deloitte, you are encouraged to take ownership of your career. We recognize there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.

Everyone's welcome... entrust your happiness to us
Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organization and the business area you're applying to. Check out recruiting tips from Deloitte professionals.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Akira
AKIRA is a bootstrapped Data & Analytics services company driven by technology excellence, consultative practices, and a focus on building technology-agnostic customer solutions that help organizations realize business value through data. We have grown more than 3x over the past 2 years across business, headcount, and the number of clients we serve since we started focusing on scaling up. AKIRA is a growth engine, where we provide great opportunities for all our team members to extract as much learning and satisfaction as possible from the work they deliver. We will double ourselves in the coming 6-9 months and want to be a great place to work.

Job Title: AI/ML Engineer (Data Scientist)
Location: Bangalore

Job Description
We are an innovative technology company committed to driving business transformation through AI and Machine Learning solutions. We specialize in solving complex business challenges by leveraging cutting-edge AI/ML technologies, delivering scalable, reliable, and maintainable solutions for our clients. We are looking for a skilled AI/ML Engineer to join our dynamic team and play a crucial role in building production-grade AI/ML pipelines and solutions.

Responsibilities
As an AI/ML Engineer, you will be responsible for designing, developing, and deploying AI and ML models to address complex business challenges. You will collaborate with cross-functional teams, including Data Engineers and Data Scientists, to build scalable, production-grade AI/ML pipelines. Your work will ensure the deployed models are reliable, maintainable, and optimized for performance in real-world environments:
- Design, develop, and deploy AI/ML models: Address complex business challenges through AI and ML solutions, ensuring production-grade deployments.
- Build scalable AI/ML pipelines: Work on data preprocessing, feature engineering, model training, validation, deployment, and ongoing monitoring of AI/ML models.
- Collaborate with cross-functional teams: Work with Data Engineers, Data Scientists, and other stakeholders to implement end-to-end AI/ML solutions.
- Optimize AI/ML models in production: Ensure that deployed models are reliable, scalable, and continuously optimized for high performance in production environments.
- Develop APIs and microservices: Build APIs or microservices to integrate AI/ML models into business solutions, making them usable across different platforms (see the sketch after this posting).
- Implement MLOps practices: Optimize workflows using CI/CD pipelines, automated retraining of models, version control, and monitoring.
- Apply state-of-the-art AI/ML techniques: Utilize deep learning, NLP, computer vision, and time-series analysis to develop innovative solutions.
- Ensure ethical AI practices: Adhere to data privacy, security, and ethical AI standards throughout the entire lifecycle of AI/ML models.
- Stay up to date with industry trends: Continuously explore and implement the latest AI/ML techniques, tools, and best practices.
- Assist in documentation and best practices: Help create and maintain standards, workflows, and documentation for data technology and AI/ML projects.

Required Skills & Qualifications
- Strong expertise in AI/ML frameworks: Proficiency with TensorFlow, PyTorch, or scikit-learn for building and deploying models.
- Proficiency in programming languages: Strong command of Python or R, including libraries and tools for data science and machine learning.
- Experience with cloud-based AI/ML services: Hands-on experience with Microsoft Azure and Databricks.
- MLOps experience: Expertise in implementing automated pipelines, model versioning, monitoring, and lifecycle management.
- Domain-specific AI expertise: Deep knowledge in NLP, computer vision, predictive analytics, or other specialized AI techniques.
- Excellent problem-solving skills: Ability to identify and address technical challenges with innovative and efficient solutions.
- Effective communication skills: Strong ability to convey complex technical concepts to both technical and non-technical audiences.
- Attention to detail: Ability to handle and execute highly specialized projects with a focus on quality.

Desired Skills & Qualifications
- Familiarity with distributed computing: Knowledge of Apache Spark and scalable training techniques for large datasets.
- Experience with GenAI: Hands-on experience developing enterprise-level GenAI applications and models for business use cases.
- Knowledge of data privacy and security standards: Understanding of ethical AI principles and compliance with relevant privacy and security standards.
- Interpersonal skills: Excellent collaboration and relationship-building skills with colleagues, stakeholders, and external partners.
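For the "Develop APIs and microservices" responsibility referenced above, a model-serving endpoint often looks like the following hedged sketch. FastAPI is an illustrative choice rather than a prescribed stack, and "model.joblib" is a hypothetical artifact from a training job:

```python
# Hedged sketch: serving a trained model behind a REST endpoint.
# "model.joblib" is a hypothetical artifact produced elsewhere.
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # e.g., a fitted scikit-learn estimator

class Features(BaseModel):
    values: List[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # Wrap the single observation in a batch of one for predict().
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

A service like this would typically be run with an ASGI server (for example, `uvicorn main:app`) and containerized with Docker for deployment, matching the MLOps practices the posting lists.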
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Kyndryl | Data Science | Bengaluru, Karnataka, India / Chennai, Tamil Nadu, India | Posted on Jun 12, 2025

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a Data Engineer, you will leverage your expertise in Databricks, big data platforms, and modern data engineering practices to develop scalable data solutions for our clients. Candidates with healthcare experience, particularly with EPIC systems, are strongly encouraged to apply. This includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The Data Engineer will also be responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.

Responsibilities
- Develop data ingestion, data processing, and analytical pipelines for big data, relational databases, and data warehouse solutions.
- Design and implement data pipelines and ETL/ELT processes using Databricks, Apache Spark, and related tools.
- Collaborate with business stakeholders, analysts, and data scientists to deliver accessible, high-quality data solutions.
- Provide guidance on cloud migration strategies and data architecture patterns such as Lakehouse and Data Mesh.
- Provide pros/cons and migration considerations for private and public cloud architectures.
- Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues.
- Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides.
- Work with data governance, data security, and data privacy tooling (Unity Catalog or Purview).

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience
- 3+ years of consulting or client service delivery experience on Azure.
- Graduate/postgraduate degree in computer science, computer engineering, or equivalent, with a minimum of 8 years of experience in the IT industry.
- 3+ years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Azure Synapse.
- Extensive hands-on experience implementing data ingestion, ETL, and data processing.
- Hands-on experience with Big Data technologies such as Java, Python, SQL, ADLS/Blob, PySpark and Spark SQL, Databricks, HDInsight, and live streaming technologies such as Event Hubs.
- Experience with cloud-based database technologies (Azure PaaS DB, AWS RDS, and NoSQL).
- Cloud migration methodologies and processes, including tools like Azure Data Factory, Database Migration Service, etc.
- Experience with monitoring and diagnostic tools (SQL Profiler, Extended Events, etc.).
- Expertise in data mining, data storage, and extract-transform-load (ETL) processes.
- Experience with relational databases and expertise in writing and optimizing T-SQL queries and stored procedures.
- Experience using Big Data file formats and compression techniques.
- Experience with developer tools such as Azure DevOps, Visual Studio Team Server, Git, Jenkins, etc.
- Experience with private and public cloud architectures, their pros/cons, and migration considerations.
- Excellent problem-solving, analytical, and critical thinking skills.
- Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail.
- Communication skills: able to communicate with both technical and non-technical audiences and derive technical requirements with stakeholders.

Preferred Technical and Professional Experience
- Cloud platform certification, e.g., Microsoft Certified: Azure Data Engineer Associate (DP-700), AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer.
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
- Experience working with EPIC healthcare systems (e.g., Clarity, Caboodle).
- Databricks certifications (e.g., Databricks Certified Data Engineer Associate or Professional).
- Knowledge of GenAI tools, Microsoft Fabric, or Microsoft Copilot.
- Familiarity with healthcare data standards and compliance (e.g., HIPAA, GDPR).
- Experience with DevSecOps and CI/CD deployments.
- Experience in NoSQL database design.
- Knowledge of GenAI fundamentals and supporting industry use cases.
- Hands-on experience with Delta Lake and Delta Tables within the Databricks environment for building scalable and reliable data pipelines (see the sketch after this posting).

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
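The Delta Lake and Delta Table experience flagged in the preferred qualifications usually centers on incremental upserts. Below is a minimal, hedged sketch of a Delta MERGE on Databricks; the table names and join key are hypothetical placeholders:

```python
# Hedged sketch: upsert (MERGE) staged records into a Delta table on
# Databricks. Table names and the join key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New and changed rows staged by an upstream ingestion job.
updates = spark.read.table("staging.patient_visits")
target = DeltaTable.forName(spark, "silver.patient_visits")

# Update matching rows, insert the rest, all in one atomic operation.
(target.alias("t")
 .merge(updates.alias("s"), "t.visit_id = s.visit_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```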
Posted 1 week ago
6.0 - 10.0 years
22 - 25 Lacs
Bengaluru
Hybrid
Mandatory Skills & Experience:
- 6 to 8 years of experience in data engineering, with strong experience in Oracle DWH/ODS environments.
- Minimum 3+ years of hands-on experience in Databricks (including PySpark, SQL, Delta Lake, and Workflows).
- Strong understanding of Lakehouse architecture, cloud data platforms, and big data processing.
- Proven experience migrating data warehouse and ETL workloads from Oracle to cloud platforms (see the sketch after this posting).
- Experience with PL/SQL, query tuning, and reverse engineering legacy systems.
- Exposure to Pentaho and/or TIBCO data virtualization/integration tools.
- Experience with CI/CD pipelines, version control (e.g., Git), and automated testing.
- Familiarity with data governance, security policies, and compliance in cloud environments.
- Strong communication and documentation skills.

Preferred Skills (Advantage):
- Experience in cloud migration projects (AWS/Azure).
- Knowledge of Delta Lake, Unity Catalog, and Databricks Workflows.
- Exposure to Kafka for real-time data streaming.
- Experience with ETL tools like Pentaho or TIBCO is an added advantage.
- AWS/Azure/Databricks certifications.

Tools & Technologies:
- Databricks, Oracle, Hadoop (HDFS, Hive, Sqoop), AWS (S3, EMR, Glue, Lambda, RDS)
- PySpark, SQL, Python, Kafka
- CI/CD (Jenkins, GitHub Actions), orchestration (Airflow, Control-M)
- JIRA, Confluence, Git (GitHub/Bitbucket)

Cloud Certifications (Preferred):
- Databricks Certified Data Engineer
- AWS Certified Solutions Architect/Developer
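One common first step in the Oracle-to-Databricks migrations described above is lifting a warehouse table over JDBC into Delta. A hedged sketch follows; the connection URL, credentials, and table names are placeholders, and a real job would pull secrets from a secret scope rather than hard-coding them:

```python
# Hedged sketch: one-off lift of an Oracle DWH table into a Delta table
# on Databricks. URL, credentials, and table names are placeholders;
# real jobs should read credentials from a secret scope, not literals.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

oracle_df = (spark.read.format("jdbc")
             .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB")
             .option("dbtable", "DWH.FACT_SALES")
             .option("user", "etl_user")
             .option("password", "...")  # placeholder credential
             .option("fetchsize", 10000)
             .load())

# Land the extract as a bronze Delta table for downstream refinement.
oracle_df.write.format("delta").mode("overwrite").saveAsTable("bronze.fact_sales")
```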
Posted 1 week ago
5.0 years
0 Lacs
Serilingampalli, Telangana, India
On-site
Description
VP, AI and Engineering

Syneos Health® is a leading fully integrated biopharmaceutical solutions organization built to accelerate customer success. We translate unique clinical, medical affairs, and commercial insights into outcomes to address modern market realities. Every day we perform better because of how we work together, as one team, each the best at what we do. We bring a wide range of talented experts together across a wide range of business-critical services that support our business. Every role within Corporate is vital to furthering our vision of Shortening the Distance from Lab to Life®. Discover what our 29,000 employees, across 110 countries, already know. WORK HERE MATTERS EVERYWHERE

Why Syneos Health
We are passionate about developing our people through career development and progression; supportive and engaged line management; technical and therapeutic area training; peer recognition; and a total rewards program. We are committed to our Total Self culture, where you can authentically be yourself. Our Total Self culture is what unites us globally, and we are dedicated to taking care of our people. We are continuously building the company we all want to work for and our customers want to work with. Why? Because when we bring together diversity of thoughts, backgrounds, cultures, and perspectives, we're able to create a place where everyone feels like they belong.

Job Summary
This role is responsible for leading the AI, software, data, and quality engineering organization, and is accountable for delivering best-in-class AI, data, and applications used by thousands of users worldwide. The engineering organization will partner with digital and technology product teams to create solutions and value for customers, solutions that help accelerate medicines and vaccines development and enable patient access. The role drives and delivers AI-infused applications at scale that will support Syneos Health's efficient growth. The engineering organization also develops technology products that supercharge internal capabilities across corporate functions. The role builds and develops the engineering organization based in India, leads a network of engineering nodes in other locations around the world, and participates in customer meetings, conferences, and technology incubators with a focus on building relationships, tracking trends, and engaging with peers in the industry.

As a senior digital and technology leader in India, the role is responsible for overseeing daily operations (technology and people) for the delivery team, technology development, and strategic growth of the company's regional offices. This leadership role focuses on driving the implementation of global technology initiatives, ensuring operational alignment with global standards, and fostering a high-performance culture within the team. It plays a key role in talent management, project delivery, stakeholder communication, and driving innovation to support the company's global goals.

Core Responsibilities
Develop a best-in-class and cost-effective engineering organization:
- Attract, develop, and retain engineering talent across all disciplines, including AI, software, data analytics, quality, testing, and agile facilitation.
- Manage and scale a team of technology professionals, ensuring the right mix of talent to meet business demands.
- Continuously upskill the organization on new technologies in alignment with enterprise technology decisions.
- Manage strategic third parties to access engineering talent and source capacity when internal capabilities are fully utilized.
- Assess the maturity of the organization, set a path to implement best practices and standards for engineering disciplines, and lead communities of practice.

Oversee and manage high-performing technology delivery:
- Partner with digital and tech product leaders to understand priorities, manage demand, provide work estimates, and maintain product roadmaps.
- Staff engineering resources on product and project teams to deliver prioritized initiatives, ensuring utilization of the organization.
- Deliver coding, configuration, and testing in product-centric and agile ways, and measure performance quarterly across value, flow, and quality metrics. Where needed, staff and deliver projects.
- Drive DevOps, DataOps, and MLOps platforms and engineering productivity (AI automation, automated code and test) in partnership with Core Technology.

Regional tech leadership:
- Lead and manage the day-to-day operations of the site-based team, ensuring alignment with global strategic objectives.
- Provide site leadership across technology projects end to end, including software development, product delivery, infrastructure management, and IT services.
- Monitor industry trends, emerging technologies, and best practices to ensure the site remains competitive and innovative.
- Foster a culture of collaboration, innovation, and continuous improvement within the site.
- Build, mentor, and inspire a high-performing team, ensuring the growth and development of employees.
- Drive employee engagement and retention initiatives to ensure a motivated and committed workforce. Partner with HR and Talent Acquisition in support of these initiatives for an engaged and sought-after employee experience.

Stakeholder communication:
- Maintain strong relationships with key stakeholders in the CDIO leadership team, including senior leadership, product, and engineering teams.
- Provide regular updates on performance, delivery progress, risks, and opportunities to CDIO executives.
- Act as a cultural ambassador, ensuring that the team's work aligns with the company's global vision and values.

Risk management and compliance:
- Ensure the organization complies with relevant legal, regulatory, and company policies.
- Identify risks related to technology, operations, and talent management, and implement mitigation strategies.

Innovation and continuous improvement:
- Promote and drive innovation within the team, encouraging the use of new technologies and approaches.
- Continuously assess and improve site processes to enhance efficiency, reduce costs, and drive value.

Qualifications
- Experience in technology or operations leadership roles, including managing a tech team in a region or a similar market.
- Experience leading a pharma services technology organization (CRO, professional services, biotech/biopharma, or healthcare technology) focused on life sciences.
- Proven track record of leading cross-functional teams and delivering complex end-to-end technology projects at a global scale.
- Experience leveraging data, analytics, and AI to develop new products and services.
- Ability to transform legacy technology and digital teams into a highly efficient, disciplined, delivery-oriented organization with strong alignment to business strategy.
- Experience managing both technical and operational aspects of a global business, particularly with teams in different geographic locations. P&L experience is a plus.
- Proven ability to lead a high-performing team and to attract talent, sustaining a continuous cycle of diversity of thought tied to employee growth and the achievement of business objectives.
- Experience leading a technology organization providing both product development and SaaS software solutions across a broad range of technologies (such as Python, Java, Apex, Databricks, Workday, Oracle Fusion, ServiceNow, Salesforce, Veeva CRM, and Veeva Vault Clinical), as well as cloud and analytical services provided by Microsoft Azure, AWS, and Oracle OCI, is preferred.
- Strong leadership and team-building abilities.
- Excellent communication and interpersonal skills, with the ability to effectively interact with senior management, technical teams, and global stakeholders.
- Deep understanding of modern software, AI, and data development methodologies, including Agile, DevOps, DataOps, and MLOps.
- Proficiency in technology management, project delivery, and risk mitigation.
- Strong business acumen, including the ability to manage budgets, resources, and operational performance.
- Strong problem-solving skills and a proactive approach to resolving challenges.
- Ability to work in a fast-paced, dynamic environment.
- Experience in a global or multi-site organization is highly desirable.

Get to know Syneos Health
Over the past 5 years, we have worked with 94% of all novel FDA-approved drugs and 95% of EMA-authorized products, and over 200 studies across 73,000 sites and 675,000+ trial patients. No matter what your role is, you'll take the initiative and challenge the status quo with us in a highly competitive and ever-changing environment. Learn more about Syneos Health: http://www.syneoshealth.com

Additional Information
Tasks, duties, and responsibilities as listed in this job description are not exhaustive. The Company, at its sole discretion and with no prior notice, may assign other tasks, duties, and job responsibilities. Equivalent experience, skills, and/or education will also be considered, so qualifications of incumbents may differ from those listed in the Job Description. The Company, at its sole discretion, will determine what constitutes equivalence to the qualifications described above. Further, nothing contained herein should be construed to create an employment contract. Occasionally, required skills/experiences for jobs are expressed in brief terms. Any language contained herein is intended to fully comply with all obligations imposed by the legislation of each country in which it operates, including the implementation of the EU Equality Directive, in relation to the recruitment and employment of its employees. The Company is committed to compliance with the Americans with Disabilities Act, including the provision of reasonable accommodations, when appropriate, to assist employees or applicants to perform the essential functions of the job.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Analytics Engineer

We are seeking a talented, motivated, and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier "Data First" commercial biopharma organization.

As an Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical and data expertise for the development of analytical data products that enable data science and analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space, developing best-in-class data pipelines and products and working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment, and delivery.

Your specific responsibilities will include:
- Hands-on development of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices.
- Enabling data science and analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way.
- Developing deep domain expertise and business acumen to ensure that all specificities and pitfalls of data sources are accounted for.
- Building data products based on automated data models, aligned with use case requirements, and advising data scientists, analysts, and visualization developers on how to use these data models.
- Developing analytical data products for reusability, governance, and compliance by design.
- Aligning with organization strategy and implementing a semantic layer for analytics data products.
- Supporting data stewards and other engineers in maintaining data catalogs, data quality measures, and governance frameworks.

Education
- B.Tech/B.S., M.Tech/M.S., or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field.

Required Experience
- 5+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience analyzing, modeling, and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets).
- High proficiency in SQL, Python, and AWS.
- Good understanding and comprehension of requirements provided by the Data Product Owner and Lead Analytics Engineer.
- Experience creating/adopting data models to meet requirements from Marketing, Data Science, and Visualization stakeholders.
- Experience with feature engineering.
- Experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.).
- Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot, and low-code tools (e.g., Dataiku).
- Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders.
- Experience in analytics use cases of pharmaceutical products and vaccines.
- Experience in market analytics and related use cases.

Preferred Experience
- Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines.
- Experience with Agile ways of working, leading or working as part of scrum teams.
- Certifications in AWS and/or modern data technologies.
- Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors.
- Experience building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers, and business stakeholders.
- Experience with data visualization technologies (e.g., Power BI).

Our Human Health Division maintains a "patient first, profits later" ideology. The organization is comprised of sales, marketing, market access, digital analytics, and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide.

We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another's thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Flexible Work Arrangements: Hybrid

Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model

Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R335382
Posted 1 week ago
8.0 years
0 Lacs
Karnataka, India
On-site
Who We Are Looking For

We are seeking a highly motivated Product Manager to lead the strategy, development, and execution of data-driven products at Nike. The ideal candidate will possess strong product management skills and an understanding of the data product lifecycle. These skills are crucial for developing scalable and high-impact data solutions that enhance Nike's business and support data-driven decision-making for stakeholders. You will serve as a Product advocate, effectively describing the Product, its value, operation, adoption process, and upcoming changes, while tailoring information to different audiences.

What You Will Work On
- Define, own and maintain the vision, roadmap, and strategy for one of the critical data products in the retail supply chain domain.
- Own, maintain and prioritize the product roadmap - break Features into stories, ensure capacity for enablers and defects.
- Oversee and manage the lifecycle of data products, including data ingestion, cross-domain integrations, transformations, and consumption layers.
- Analyze costs and benefits of new systems or system integrations.
- Ensure data quality, security, and compliance within all product deliverables.
- Act as the liaison between business stakeholders, engineering teams, data analysts, and supply chain SMEs.
- Accountable for defining sprint and quarterly goals clearly, ensuring work is accepted and delivered following the guidelines and milestones set forth in the Product Roadmap.
- Facilitate agile ceremonies and prioritize backlogs in alignment with sprint goals, multi-year objectives, quarterly goals, and roadmaps.
- Communicate product progress, risks, and KPIs to Senior Management and stakeholders.
- Advocate the importance of data products and encourage their adoption across various business units.
- Identify, escalate, and remove roadblocks, triage defects, and seek clarity from consumers on issues where appropriate.
- Work with the Team to translate any internal "Enablers" into value-driven stories.
- Facilitate process mapping, feature mapping with Personas, gap analysis, and data flow to assess the business implications of features, tools, or platforms.
- Prioritize and make trade-offs based on an understanding of strategies, budget, epics, enabler requirements, and goals to maximize business value.

Key Competencies
- Product strategy, roadmap and requirement planning: Work with Global and Geo-based end users and Product Leaders to set the product north star, providing input for Data Product-specific requirements. Ability to take desired business outcomes and customer/user research data and convert them into functional and non-functional requirements.
- Cross-functional expertise: Ability to interact with and influence stakeholders across functions and levels.
- E2E Product Management: Ability to oversee end-to-end product lifecycles for a product, from strategy (innovate, generate ideas, justify investments) to launch (execution, integration, sequencing), operating in-life (track performance, fix issues), and managing end-of-life and measurement, including integration, sequencing, and stitching across all levels.
- Technology solutioning: Ability to oversee technical solutioning in partnership with engineering.
- Functional depth: Understanding of enterprise domains like retail supply chain, marketplace or merchandising.

What You Bring

We're looking for someone with a proven track record of navigating a complex product landscape and delivering successful data products. To that effect, we are looking for someone with these skills:
- Master's Degree in Business Administration, Information Systems, or another relevant business/technology field. Bachelor's Degree in Computer Science or Information Technology is preferred, but not a must.
- 8+ years of professional experience in product management, with hands-on proficiency in data analytics or data engineering.
- Proven track record of managing enterprise-scale data products or platforms.
- Strong understanding of data architecture, data modeling, governance, and metadata.
- Experience with tools such as Snowflake, Databricks, Tableau/Power BI, or similar.
- Exceptional communication, stakeholder management, and storytelling skills.
- Experience working in Agile or Scaled Agile environments.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together - when we combine your strengths with ours - is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Description

Job Title: Data Scientist
Location: Bangalore
Reporting to: Senior Manager Analytics

Purpose of the role
Anheuser-Busch InBev (AB InBev)'s Logistics Analytics team is responsible for transforming the company's Supply Chain by embedding intelligence across key process areas in Inventory Planning, Transportation & Warehousing.

Key tasks & accountabilities
- Collaborate with product owners from business functions to translate business problems into Data Science use cases.
- Explore and develop ML/AI algorithms to solve new business problems or to improve existing methodology, model accuracy, etc.
- Work on building code that will deploy into production, using code design and style standards.
- Document your thought process and create artefacts on the team repo/wiki that can be shared with business and engineering for sign-off.
- Significantly improve the performance and reliability of our code to produce high-quality and reproducible results.
- Collaborate with other team members to advance the team's ability to ship high-quality code, fast!
- Maintain basic developer hygiene that includes, but is not limited to, writing tests, using loggers, and keeping a readme, to name a few.

Qualifications, Experience, Skills

Level Of Educational Attainment Required
- Academic degree in, but not limited to, a Bachelors or Masters in engineering (CS), B.Tech/BE, or a Masters in data science, statistics, applied mathematics, mathematics, economics, etc.

Previous Work Experience
- Minimum 3 to 6+ years of relevant experience in Analytics & Data Science / building ML models.
- Preferred industry exposure: CPG or Supply Chain domain, with the capability of successfully deploying analytics solutions and products for internal or external clients.

Technical Skills Required
- Hands-on experience in data manipulation using Excel, Python, SQL.
- Expert-level proficiency in Python (knowledge of object-oriented design concepts and the ability to write end-to-end ML or data pipelines in Python).
- Proficient in the application of one or more of, with exposure to the others: ML concepts (like regression, classification, clustering, time series forecasting) and optimization techniques (linear and non-linear optimization) to solve end-to-end business problems.
- Familiarity with the Azure tech stack, Databricks, and MLflow on any cloud platform.

Other Skills Required
- Passion for solving problems using data.
- Detail oriented, analytical and inquisitive.
- Ability to learn on the go.
- Ability to effectively communicate and present information at various levels of an organization.
- Ability to work independently and with others.
- And above all of this, an undying love for beer!

We dream big to create a future with more cheers!
Posted 1 week ago
5.0 years
0 Lacs
Karnataka, India
On-site
Who You'll Work With

The Senior Data Analyst will work with the Data and Artificial Intelligence team at Nike. The Data and Artificial Intelligence team at Nike drives the enterprise-wide data needs that fuel Nike's innovation. This role is crucial in translating the business needs of Nike into data requirements, and thereby has a significant impact on the growth of Nike's business. This role will fuel the foundational data layers that will power the advanced data analytics of Nike.

Who We Are Looking For

We are looking for individuals who are highly driven and able to understand and translate business requirements into data needs. Candidates should be good at problem solving and have in-depth technical knowledge of SQL and big data, with optional expertise in PySpark. They need to have excellent verbal and written communication and should be willing to work with business consumers to understand their needs and requirements.

Role requirements include:
- A minimum of a bachelor's degree in computer science/information science engineering.
- 5+ years of hands-on experience in the data and analytics space.
- Very high expertise in SQL, with the ability to work on platforms like Databricks, Hive and Snowflake.
- Ability to integrate and communicate moderately complex information, sometimes to audiences who are not familiar with the subject matter. Acts as a resource to teammates.
- Ability to integrate complex datasets and derive business value out of data.
- Independently utilizes knowledge, skills, and abilities to identify areas of opportunity, resolve complex problems and navigate solutions.

What You'll Work On

In this role you'll be working with a team of talented data engineers, product managers and data consumers who focus on the enterprise-wide data needs of Nike. You'll have a direct impact on the deliverables of the team, and you'll be guiding the team on solving complex business problems. Some of your day-to-day activities will include:
- Collaborating with engineers, product managers and business users for optimal usage of data.
- Understanding business use cases using data.
- Analyzing data to inform business decisions.
- Troubleshooting complex data integration problems at a business level.
- Writing and enhancing complex queries in Databricks, Hive and Snowflake.
- Providing inputs to product management on growing the foundational data layers.
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location(s): Tower -11, (IT/ITES) SEZ of M/s Gurugram Infospace Ltd, Vill. Dundahera, Sector-21, Gurugram, Haryana, 122016, IN
Line Of Business: Data Estate (DE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are - with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways.

If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Job Summary

The Data Specialist will explore and transform an existing data remediation environment to ensure the smooth execution and automation of data validation, reporting, and analysis tasks. The ideal candidate will have strong technical skills in Excel, SQL, and Python, proficiency in using Microsoft Office tools for reporting, and familiarity with data visualization tools like Power BI or Tableau. Excellent communication and leadership skills are essential to foster a collaborative and productive team environment. Responsibilities may include leading small or large teams of full-time employees or contractors to focus on remediating data at scale.

Key Responsibilities

Team Management:
- Work with strategic teams of 5-10 or more data analysts and specialists, as needed for specific initiatives.
- Provide guidance, mentorship, and support to team members to achieve individual and team goals.

Data Validation and Analysis:
- Oversee data validation processes to ensure accuracy and completeness of data.
- Utilize Excel, SQL, and Python for data manipulation, analysis, and validation tasks.
- Implement best practices for data quality and integrity.

Quality Assurance (QA):
- Establish and maintain QA processes to ensure the accuracy and reliability of data outputs.
- Conduct regular audits and reviews of data processes to identify and rectify errors.
- Develop and enforce data governance policies and procedures.

Reporting and Presentation:
- Create and maintain comprehensive reports using Microsoft PowerPoint, Word, and other tools.
- Develop insightful data visualizations and dashboards using Power BI and Tableau.
- Present data findings and insights to stakeholders in a clear and concise manner.

Collaboration and Communication:
- Collaborate with cross-functional teams to understand data needs and deliver solutions.
- Communicate effectively with team members, stakeholders, and clients.
- Facilitate team meetings and discussions to ensure alignment and progress on projects.

Continuous Improvement:
- Identify opportunities for process improvements and implement changes to enhance efficiency.
- Stay updated with industry trends and advancements in data management and reporting tools.
- Foster a culture of continuous learning and development within the team.

Qualifications
- Bachelor's degree in Economics, Statistics, Computer Science, Information Technology or other related fields.
- 3+ years of relevant experience in a similar field.
- Strong proficiency in Excel, SQL, and Python for data analysis and validation.
- Advanced skills in Microsoft PowerPoint, Word, and other reporting tools.
- Familiarity with Power BI and Tableau for data visualization.
- Experience with Databricks.
- Excellent communication, leadership, and interpersonal skills.
- Strong problem-solving abilities and attention to detail.
- Ability to work independently and manage multiple priorities in a fast-paced environment.

Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law.

Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet.

Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee's tenure with Moody's.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description

Please note: even though the GPP mentions Remote, this is a Hybrid role.

Key Responsibilities
- Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
- Continuously monitor and troubleshoot data quality and integrity issues.
- Implement data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Develop reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms using ETL/ELT tools or scripting languages (a minimal sketch of such a pipeline step follows this posting).
- Develop physical data models and implement data storage architectures as per design guidelines.
- Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participate in testing and troubleshooting of data pipelines.
- Develop and operate large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB).
- Use agile development technologies, such as DevOps, Scrum, Kanban, and continuous improvement cycles, for data-driven applications.

Responsibilities

Qualifications:
- College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required.
- This position may require licensing for compliance with export controls or sanctions regulations.

Competencies
- System Requirements Engineering: Translate stakeholder needs into verifiable requirements and establish acceptance criteria.
- Collaborates: Build partnerships and work collaboratively with others to meet shared objectives.
- Communicates Effectively: Develop and deliver multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer Focus: Build strong customer relationships and deliver customer-centric solutions.
- Decision Quality: Make good and timely decisions that keep the organization moving forward.
- Data Extraction: Perform ETL activities from various sources and transform them for consumption by downstream applications and users.
- Programming: Create, write, and test computer code, test scripts, and build scripts using industry standards and tools.
- Quality Assurance Metrics: Apply measurement science to assess whether a solution meets its intended outcomes.
- Solution Documentation: Document information and solutions based on knowledge gained during product development activities.
- Solution Validation Testing: Validate configuration item changes or solutions using best practices.
- Data Quality: Identify, understand, and correct flaws in data to support effective information governance.
- Problem Solving: Solve problems using systematic analysis processes and industry-standard methodologies.
- Values Differences: Recognize the value that different perspectives and cultures bring to an organization.

Qualifications

Skills and Experience Needed

Must-Have:
- 3-5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python.
- Hands-on experience with Spark (Scala/PySpark) and SQL.
- Experience with Spark Streaming, Spark internals, and query optimization.
- Proficiency in Azure cloud services.
- Agile development experience.
- Unit testing of ETL.
- Experience creating ETL pipelines with ML model integration.
- Knowledge of Big Data storage strategies (optimization and performance).
- Critical problem-solving skills.
- Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
- Quick learner.

Nice-to-Have:
- Understanding of the ML lifecycle.
- Exposure to Big Data open source technologies.
- Experience with Spark, Scala/Java, Map-Reduce, Hive, HBase, and Kafka.
- SQL query language proficiency.
- Experience with clustered compute cloud-based implementations.
- Familiarity with developing applications requiring large file movement for a cloud-based environment.
- Exposure to Agile software development.
- Experience building analytical solutions.
- Exposure to IoT technology.

Work Schedule: Most of the work will be with stakeholders in the US, with an overlap of 2-3 hours during EST hours on a need basis.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2409179
Relocation Package: Yes
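As referenced in the posting above, "quality data pipelines with monitoring and alert mechanisms" is a recurring requirement in these roles. A minimal, hypothetical PySpark sketch of such a gate is below; the table, column names, tolerance, and output path are invented for illustration, not Cummins code:

```python
# Minimal sketch: validate a batch before loading it, alerting when
# completeness checks fail. Not production code.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-quality-check").getOrCreate()

# Hypothetical incoming batch with a couple of deliberate nulls
batch = spark.createDataFrame(
    [(1, "2024-01-01", 120.0), (2, "2024-01-01", None), (3, None, 75.5)],
    ["id", "event_date", "amount"],
)

# Simple completeness checks; real pipelines would typically pull these
# rules from a governance catalog rather than hard-coding them.
total = batch.count()
null_dates = batch.filter(F.col("event_date").isNull()).count()
null_amounts = batch.filter(F.col("amount").isNull()).count()

MAX_NULL_RATIO = 0.05  # hypothetical tolerance

if null_dates / total > MAX_NULL_RATIO or null_amounts / total > MAX_NULL_RATIO:
    # In practice this would page an on-call channel or raise a ticket.
    print(f"ALERT: quality gate failed: {null_dates} null dates, "
          f"{null_amounts} null amounts out of {total} rows")
else:
    batch.write.mode("append").parquet("/tmp/curated/events")  # hypothetical target
```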
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Open Position

Join us as a Cloud Engineer at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as a Cloud Engineer you should have experience with:
- Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, access management.
- Deep expertise in one or more cloud platforms: AWS, Azure, GCP.
- Strong experience in containerization and orchestration (Docker, Kubernetes, Helm).
- Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc.
- Proficiency in scripting languages (Python, Bash, PowerShell).
- Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML.
- Strong understanding of DevOps principles applied to ML workflows.

Key Responsibilities may include:
- Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms.
- Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation.
- Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows.
- Ensure version control, repeatability, and compliance across all infrastructure components.
- Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor.
- Optimize performance and resource utilization of AI workloads, including GPU-based training/inference.

Other valued skills and experience:
- Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
- Understanding of model interpretability, responsible AI, and governance.
- Contributions to open-source MLOps tools or communities.
- Strong leadership, communication, and cross-functional collaboration skills.
- Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
- Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Organizations everywhere struggle under the crushing costs and complexities of "solutions" that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done.

There's another option. Freshworks. With a fresh vision for how the world works.

At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks' customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And, over 4,500 Freshworks employees make this possible, all around the world.

Fresh vision. Real impact. Come build it with us.

Job Description

As an MDM Technology Analyst (IC3), you will be responsible for supporting the implementation, maintenance, and enhancement of Master Data Management (MDM) solutions utilizing tools such as Reltio, Reltio Integration Hub (RIH), Informatica MDM, and SQL/Python scripting. This role will also involve working with MDM workflows and D&B/ZoomInfo/Salesforce connectors, and ensuring smooth integration across systems. You will collaborate with cross-functional teams to ensure data integrity, quality, and accessibility across the organization.

Key Responsibilities:

MDM Implementation & Configuration:
- Implement and configure MDM solutions using Reltio while ensuring alignment with business requirements and best practices.
- Develop and maintain data models, workflows, and business rules within the MDM platform.
- Work on Reltio Workflow (DCR Workflow & Custom Workflow) to manage data approvals and role-based assignments.

Data Integration & Transformation:
- Support data integration efforts using Reltio Integration Hub (RIH) to facilitate data movement across multiple systems.
- Develop pipelines using SQL, Python, and integration tools to extract, transform, and load (ETL) data.
- Work with D&B, ZoomInfo, and Salesforce connectors for data enrichment and integration.

Data Quality & Governance:
- Perform data analysis and profiling to identify data quality issues and recommend solutions for data cleansing and enrichment.
- Collaborate with stakeholders to define and document data governance policies, procedures, and standards.
- Optimize MDM workflows to enhance data stewardship and governance.

Stakeholder Collaboration & Support:
- Work closely with business teams, data stewards, and IT teams to understand and resolve MDM-related issues.
- Conduct user training and provide ongoing support to ensure successful adoption of MDM tools and processes.
- Stay updated on industry trends and emerging technologies in MDM, data integration, and data governance.

Qualifications

Education: B.E / B.Tech in Computer Science, Information Technology, or a related field.

Experience: 5-8 years of experience working with Reltio MDM in a professional setting.

Technical Skills:
- Strong proficiency in SQL for data manipulation and querying.
- Knowledge of Python scripting for data processing and automation.
- Experience with Reltio Integration Hub (RIH) and handling API-based integrations.
- Familiarity with data modelling, matching, and survivorship concepts and methodologies.
- Experience with D&B, ZoomInfo, and Salesforce connectors for data enrichment.
- Understanding of MDM workflow configurations and role-based data governance.
- Experience with AWS Databricks, data lakes and warehouses.

Soft Skills:
- Excellent analytical and problem-solving skills with a keen attention to detail.
- Strong ability to communicate effectively with both technical and non-technical stakeholders.
- Proven ability to work independently and collaborate in a fast-paced environment.

Additional Information

At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion, irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
Posted 1 week ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us:

MUFG Bank, Ltd. is Japan's premier bank, with a global network spanning more than 40 markets. Outside of Japan, the bank offers an extensive scope of commercial and investment banking products and services to businesses, governments, and individuals worldwide. MUFG Bank's parent, Mitsubishi UFJ Financial Group, Inc. (MUFG) is one of the world's leading financial groups. Headquartered in Tokyo and with over 360 years of history, the Group has about 120,000 employees and offers services including commercial banking, trust banking, securities, credit cards, consumer finance, asset management, and leasing. The Group aims to be the world's most trusted financial group through close collaboration among our operating companies, flexibly responding to all the financial needs of our customers, serving society, and fostering shared and sustainable growth for a better world. MUFG's shares trade on the Tokyo, Nagoya, and New York stock exchanges.

MUFG Global Service Private Limited:

Established in 2020, MUFG Global Service Private Limited (MGS) is a 100% subsidiary of MUFG with offices in Bengaluru and Mumbai. MGS India has been set up as a Global Capability Centre / Centre of Excellence to provide support services across various functions such as IT, KYC/AML, Credit, Operations etc. to MUFG Bank offices globally. MGS India has plans to significantly ramp up its growth over the next 18-24 months while servicing MUFG's global network across the Americas, EMEA and Asia Pacific.

About the Role:

Position Title: GFCD Data Analytics & Transaction Monitoring Tuning/Optimization, VP
Corporate Title: Vice President
Reporting to: Director - Global Transaction Monitoring
Location: Bangalore

Job Profile

Position details:

Purpose of Role:

We are seeking a highly skilled and data-driven Senior Financial Crime Analytics Team Lead to join our Global Financial Crimes Division (GFCD) team. In this role, you will lead a team responsible for primarily using Actimize and Databricks to conduct advanced analytics to enhance our transaction monitoring (TM) and customer risk rating (CRR) capabilities. You will work at the intersection of data science, compliance, and technology to advise, build, and configure scalable solutions that protect the organization from financial crime.

Main Responsibilities:
- Responsible for strategizing the team's initiatives and determining measurable outcomes with GFCD management that align the MGS team's goals with broader GFCD global program goals on an annual basis.
- Manage the team's progress against project/initiative timelines and goals agreed upon with GFCD management.
- Coordinate with global stakeholders to conduct analytics supporting all regional financial crimes offices (Americas, EMEA, APAC, Japan).
- Oversee the design and implementation of TM and CRR tuning methodologies, including what-if scenario analysis, threshold optimization, and ATL/BTL sampling (a minimal illustration of ATL/BTL sampling follows this posting).
- Lead the end-user team using the full suite of Databricks capabilities to support GFCD's goals related to analytics, tuning and optimization.
- Supervise exploratory data analysis (EDA) and communicate insights to stakeholders to support decision-making.
- Oversee and guide the team to analyze complex datasets, identify new methods to detect anomalies, and assist with the development and execution of a strategy to apply machine learning techniques for financial crime detection.
- Guide the development of sustainable data pipelines and robust ETL processes using Python, R, Scala, and SQL.
- Build and maintain utilities that support TM optimization.
- Ensure compliance with technical standards, data integrity, and security policies.
- Collaborate with centralized reporting, data governance, and operational teams to ensure alignment and efficiency.

Skills and knowledge:
- Transaction Monitoring (Actimize): Experienced with Actimize for monitoring and analyzing transactions to identify suspicious activities and red flags indicative of money laundering, terrorism financing, and other financial crimes.
- Strong Technical Skills: Expertise in Python, Scala, and SQL, with familiarity with rules-based and machine learning models and model governance, ideally those relevant to transaction monitoring and sanctions screening.
- Proficiency in Databricks and Apache Spark: Skilled in developing scalable data pipelines and performing complex data analysis using Databricks and Apache Spark, with experience in Delta Lake for efficient data storage and real-time data streaming applications.
- Relevant Certifications: Databricks Certified Data Analyst Associate, Databricks Certified Machine Learning Associate, Databricks Certified Data Engineer Associate.
- Experience with transaction monitoring, sanctions screening, and financial crimes data sources.
- Excellent communication and presentation skills, with the ability to convey complex data insights to non-technical stakeholders.

Job Requirements:

Additional skills:
- Experience interfacing with banking regulators and enforcement staff.
- Thorough understanding of an effective financial crimes risk management framework.
- Demonstrated ability to manage multiple projects simultaneously.
- The ability to interact effectively at all levels of the organization, including Bank staff, management, directors and prudential regulators.
- Ability to work autonomously and to initiate and prioritize own work.
- Ability to work with teams of project managers.
- Solid judgment, strong negotiating skills, and a practical approach to implementation, including knowledge of Bank systems.
- Ability to balance regulatory requirements with the best interests of the Bank and its customers.
- Ability to prepare analytical reports and visual representations of information.
- Ability to apply mathematical principles or statistical approaches where needed to solve problems.

Education & professional qualifications:
- Bachelor's degree in Computer Science, Information Systems, Information Technology, or a related field.

Experience:
- 15+ years of experience in financial crimes data analytics within the financial services industry.

Equal Opportunity Employer:

The MUFG Group is committed to providing equal employment opportunities to all applicants and employees and does not discriminate on the basis of race, colour, national origin, physical appearance, religion, gender expression, gender identity, sex, age, ancestry, marital status, disability, medical condition, sexual orientation, genetic information, or any other protected status of an individual or that individual's associates or relatives, or any other classification protected by the applicable laws.
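For readers unfamiliar with the tuning terminology in the posting above, here is a minimal, hypothetical Python illustration of ATL/BTL (above/below-the-line) threshold sampling; the threshold, band, data, and sample sizes are invented for the example and do not reflect MUFG's actual methodology:

```python
# ATL/BTL sampling sketch: alerts fire above a threshold ("the line");
# a sample from just below the line is reviewed to check whether the
# threshold is missing genuinely suspicious activity.
import random

random.seed(7)
threshold = 10_000.0  # hypothetical single-transaction alerting threshold
amounts = [random.uniform(1_000, 20_000) for _ in range(5_000)]

atl = [a for a in amounts if a >= threshold]                    # alerts generated
btl = [a for a in amounts if 0.8 * threshold <= a < threshold]  # near-miss band

# Review a random sample from each population; in practice sample sizes
# come from a statistical sampling plan, not a fixed number.
atl_sample = random.sample(atl, min(30, len(atl)))
btl_sample = random.sample(btl, min(30, len(btl)))

print(f"{len(atl)} ATL, {len(btl)} BTL in band; "
      f"sampled {len(atl_sample)} ATL and {len(btl_sample)} BTL for review")
```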
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Evernorth

Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don't, won't or can't. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there.

Data & Analytics Associate Manager

Position Summary

The Data & Analytics Associate Manager is responsible for helping support the Enterprise Data Strategy team in the identification of key data elements across the business, identification of sources, enabling the connection across sources, and ultimately supporting the consumption model of this data. This individual will work with users, technology, accounting and finance to develop requirements and support delivery of consumable data for business insights.

Job Description & Responsibilities

The Data & Analytics Associate Manager works with the other team members to support the development and maintenance of the Enterprise Data foundational structure and end-use consumption model. Key stakeholder partners will be Finance, Accounting, Technical Teams, and Automation and AI teams.
- Support the requirements gathering of data elements used by business areas and the mapping of elements to sources.
- Support the identification of connection points of disparate data sets to support joins for an integrated data solution.
- Data collection and preparation, inclusive of scrubbing, development of quality rules, and processes for addressing gaps in data.
- Analysis & interpretation of data to support identification of trends and quality issues.
- Reporting and visualization: develop dashboards, reports, and other mechanisms to support consumption.
- Leverage technologies, inclusive of a data virtualizer and other tools, to support obtaining, mapping, transforming, storing and packaging data to make it useful for end consumers.
- Support the development of prototype solutions to explore the application of technology leveraging data - e.g. application of AI, automation initiatives, etc.
- Continuous improvement in the identification of opportunities for process improvement or process enhancements in day-to-day execution.

Competencies / Skills
- Ability to review deliverables for completeness, quality, and compliance with established project standards.
- Ability to resolve conflict (striving for win-win outcomes); ability to execute with limited information and ambiguity.
- Ability to deal with organizational politics, including the ability to navigate a highly matrixed organization effectively.
- Strong influencing skills (sound business and technical acumen, as well as skill at achieving buy-in for delivery strategies).
- Stakeholder management (setting and managing expectations).
- Strong business acumen, including the ability to effectively articulate business objectives.
- Analytical skills; highly focused; team player; versatile; resourceful.
- Ability to learn and apply quickly, including the ability to effectively impart knowledge to others.
- Effective under pressure.
- Precise communication skills, including an ability to project clarity and precision in verbal and written communication, and strong presentation skills.
- Strong problem-solving and critical thinking skills.
- Project management.
- Requirements gathering.
- User interaction / customer service.
- Reporting and dashboards.
- Ability to be flexible with job responsibilities and workflow changes.
- Ability to identify process improvements and implement changes; an outside-the-box thinker.
- Problem-solving, consulting skills, teamwork, leadership, and creativity skills a must.
- Analytical mind with outstanding ability to collect and analyze data.

Experience Required

Qualified candidates will typically have 8-11 years of financial data and analytics work experience, inclusive of disciplined project delivery with a focus on quality output within project timelines. Successful candidates will be high-energy self-starters with a focus on quality output and project delivery success. Candidates must be excellent problem solvers and creative thinkers.

Experience Desired
- Desired tool experience and project practices: Microsoft Excel, Agile, Jira, SharePoint, Confluence, Tableau, Alteryx, virtualizer tools.
- Experience with Big Data platforms (Databricks, Hadoop, AWS).
- Demonstrated experience establishing and delivering complex projects/initiatives within agreed-upon parameters while achieving the benefits and/or value-added results.
- Experience with Agile delivery methodology.

Location & Hours of Work

Hyderabad - Hybrid - 11:30 AM IST to 8:30 PM IST

Equal Opportunity Statement

Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services

Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Job

The Director of Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization's business goals and enables data-driven decision-making.

Roles and Responsibilities
- Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer in, this space.
- Collaborate with and influence leadership to directly impact company strategy and direction.
- Develop new techniques and data pipelines that will enable various insights for internal and external customers.
- Develop deep partnerships with client implementation teams, engineering and product teams to deliver on major cross-functional measurements and testing.
- Communicate effectively to all levels of the organization, including executives.
- Succeed in partnering teams with dramatically varying backgrounds, from the highly technical to the highly creative.
- Design a data engineering roadmap and execute the vision behind it.
- Hire, lead, and mentor a world-class data team.
- Partner with other business areas to co-author and co-drive strategies on our shared roadmap.
- Oversee the movement of large amounts of data into our data lake.
- Establish a customer-centric approach and synthesize customer needs.
- Own end-to-end pipelines and destinations for the transfer and storage of all data.
- Manage 3rd-party resources and critical data integration vendors.
- Promote a culture that drives autonomy, responsibility, perfection and mastery.
- Maintain and optimize software and cloud expenses to meet the financial goals of the company.
- Provide technical leadership to the team in the design and architecture of data products, and drive change across process, practices, and technology within the organization.
- Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department.
- Ensure data quality, security, and accessibility across the organization.

Skills You Will Need
- 10+ years of experience in data engineering.
- 5+ years of experience leading data teams of 30+ resources or more, including selection of talent and planning/allocating resources across multiple geographies and functions.
- 5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc.
- Experience creating large-scale data engineering pipelines, data-based decision-making and quantitative analysis tools and software.
- Hands-on experience with code version control systems (Git).
- Experience with CI/CD, data architectures, pipelines, quality, and code management.
- Experience with complex, high-volume, multi-dimensional data, based on unstructured, structured, and streaming datasets.
- Experience with SQL and NoSQL databases.
- Experience creating, testing, and supporting production software and systems.
- Proven track record of identifying and resolving performance bottlenecks for production systems.
- Experience designing and developing data lake, data warehouse, ETL and task orchestration systems.
- Strong leadership, communication, time management and interpersonal skills.
- Proven architectural skills in data engineering.
- Experience leading teams developing production-grade data pipelines on large datasets.
- Experience designing a large data lake and lakehouse, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model.
- Experience with common data languages (e.g. Python, Scala) and data warehouses (e.g. Redshift, BigQuery, Snowflake, Databricks).
- Extensive experience with cloud tools and technologies - GCP preferred.
- Experience managing real-time data pipelines.
- Successful track record and demonstrated thought leadership, cross-functional influence and partnership within an agile/waterfall development environment.
- Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001).

Nice to have:
- HR services industry experience.
- Experience in data science, including predictive modeling.
- Experience leading teams across multiple geographies.
Posted 1 week ago
5.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Company Description

GfK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers' buying behavior, and the dynamics impacting their markets, brands and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives "Growth from Knowledge".

GfK is seeking a Machine Learning Engineer with hands-on Python experience and proven analytical and problem-solving skills. You will be involved in various data engineering aspects - from data collection, cleaning, and preprocessing, to training models and deploying them to production. The ideal candidate will possess strong technical and interpersonal skills, along with solid ML skills. In addition, the candidate will collaborate across multi-functional teams to achieve product milestones as agreed with stakeholders.

Job Description

Position Description
- Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress.
- Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their success probability.
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
- Verifying data quality and ensuring it via data cleaning.
- Defining validation strategies.
- Defining the preprocessing or feature engineering to be done on a given dataset.
- Defining data augmentation pipelines.
- Finding available datasets that could be used for training.
- Training models and tuning their hyperparameters.
- Analyzing the errors of the model and designing strategies to overcome them.
- Deploying models to production.
- Work independently and collaboratively on a multi-disciplined project team in an Agile development environment.
- Be actively involved in the design, development and testing activities for the Big Data product.
- Provide feedback to development teams on code/architecture optimization.

Qualifications

Required Skills and Experience
- 5+ years of experience in machine learning with DevOps.
- 5+ years of hands-on experience developing in Python and PySpark; experience with Spark is preferred.
- A strong foundation in statistics and the ability to utilize statistical methods to analyze data and derive meaningful insights.
- Familiarity with Azure Databricks or similar.
- Proficiency with a deep learning framework such as TensorFlow, PyTorch or Keras.
- Proficiency with Python and basic libraries for machine learning such as scikit-learn and pandas.
- Expertise in visualizing and manipulating big datasets (5+ years of experience).
- Ability to select hardware to run an ML model with the required latency.
- Familiarity with Azure services.
- Proven experience with CI/CD.
- Proven experience with version control (GitHub, Bitbucket).
- Familiarity with Linux OS/concepts.
- Strong written and verbal communication skills.
- Self-motivated and able to work well in a team.

Education
- Bachelor of Science degree from an accredited university.

Additional Information
- Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms.
- Recharge and revitalize with the help of wellness plans made for you and your family.
- Plan your future with financial wellness tools.
- Stay relevant and upskill yourself with career development opportunities.

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ

NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion

NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 week ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, other skills that are often expected or helpful include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
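To make the first few items on that list concrete, here is a minimal, hypothetical PySpark sketch of the kind of aggregation task often posed in Databricks interviews; the dataset and column names are invented for illustration:

```python
# Same aggregation twice: once with the DataFrame API, once with Spark SQL,
# the two core skills most of the postings above ask for.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("databricks-practice").getOrCreate()

orders = spark.createDataFrame(
    [("2024-01-01", "Indore", 1200.0),
     ("2024-01-01", "Pune", 800.0),
     ("2024-01-02", "Indore", 400.0)],
    ["order_date", "city", "amount"],
)

# DataFrame API: daily revenue per city
daily = (orders.groupBy("order_date", "city")
               .agg(F.sum("amount").alias("revenue"))
               .orderBy("order_date", "city"))

# Equivalent Spark SQL over a temporary view
orders.createOrReplaceTempView("orders")
daily_sql = spark.sql(
    "SELECT order_date, city, SUM(amount) AS revenue "
    "FROM orders GROUP BY order_date, city ORDER BY order_date, city"
)

daily.show()
```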
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!