3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Silverpush is at the forefront of AI-powered video advertising, delivering sophisticated video ad solutions that empower brands to achieve impactful campaigns within a privacy-centric environment. Operating across 30+ countries, we specialize in creating contextually relevant advertising experiences that drive genuine engagement and conversion. Silverpush's commitment to innovation and technological advancement enables us to navigate the evolving digital landscape, providing our partners with the tools necessary to connect with audiences on a global scale. We are dedicated to fostering a culture of creativity and excellence, driving the future of ad tech with integrity and foresight. For more information about Silverpush's innovative advertising solutions, please visit www.silverpush.co.

Responsibilities:
● Analyze complex datasets to identify trends, patterns, and correlations, and extract actionable insights that can inform strategic decisions.
● Design and build predictive models using statistical and machine learning techniques (e.g., regression, classification, XGBoost, clustering).
● Research and develop analysis, forecasting, and optimization methods across ads performance, content performance modeling, and live experiments.
● Research and prototype with cutting-edge LLM technologies and generative AI to unlock new opportunities in personalization, targeting, and automation.

Ideal Candidate Profile
● 3+ years of experience in Data Science, ideally in advertising or media-related domains.
● Degree in a quantitative discipline (e.g., Statistics, Computer Science, Mathematics, or a Master's in Data Science).
● Deep experience working with large-scale structured and unstructured data.
● Strong foundation in machine learning and statistical modeling.
● Familiarity with building and deploying models in production (basic MLOps knowledge).
● Comfortable with NLP and computer vision, and interested in applying LLMs to real-world use cases.
● Excellent communication skills, with the ability to explain complex concepts to non-technical stakeholders.

Technical Skills
● Languages & Tools: Python, PySpark, SQL
● ML Techniques: Regression, Classification, Clustering, Decision Trees, Random Forests, XGBoost, SVM
● LLM Tech: Familiarity with tools like OpenAI, Hugging Face, LangChain, and prompt engineering
● Data Infrastructure: ETL tools, Postgres, BigQuery/Snowflake, S3/GCP
● Statistical Analysis: A/B testing, experiment design, causal inference
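For illustration, a minimal sketch of the kind of predictive modeling this posting describes: an XGBoost classifier predicting ad conversions. The dataset path, column names, and hyperparameters are hypothetical placeholders, not details from the posting.

```python
# Hedged sketch: train a conversion-prediction model of the kind the posting
# lists (classification with XGBoost). All data-specific names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_csv("ad_impressions.csv")       # hypothetical dataset
X = df.drop(columns=["converted"])           # hypothetical label column
y = df["converted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate ranking quality on held-out impressions.
preds = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, preds))
```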
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring GCP Data Engineers for our Gurgaon location. Candidates should have strong experience in big data, PySpark, and Python or Java, along with GCP services such as GCS, BigQuery, Dataflow, Dataproc, and Pub/Sub, and should be able to join within 0-30 days. Please share your resume at vaishali.tyagi@impetus.com.

Required Skill-Set
- Able to effectively use GCP managed services, e.g. Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS (at least 4 of these services).
- Good to have knowledge of Cloud Composer, Cloud SQL, Bigtable, and Cloud Functions.
- Strong experience in big data technologies: Hadoop, Sqoop, Hive, and Spark, including DevOps.
- Good hands-on expertise in either Python or Java programming.
- Good understanding of GCP core services such as Google Cloud Storage, Google Compute Engine, Cloud SQL, and Cloud IAM.
- Good to have knowledge of GCP services such as App Engine, GKE, Cloud Run, Cloud Build, and Anthos.
- Ability to drive the deployment of customers' workloads into GCP and provide guidance, a cloud adoption model, service integrations, appropriate recommendations to overcome blockers, and technical roadmaps for GCP cloud implementations.
- Experience with technical solutions based on industry standards using GCP IaaS, PaaS, and SaaS capabilities.
- Act as a subject-matter expert or developer for GCP and become a trusted advisor to multiple teams.
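For context, a hedged sketch of the kind of GCP pipeline work listed above: a PySpark job that reads raw events from GCS and writes aggregates to BigQuery. Bucket, dataset, and table names are hypothetical, and it assumes the spark-bigquery connector is available on the Dataproc cluster.

```python
# Hedged sketch of a Dataproc-style PySpark job: GCS in, BigQuery out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gcs-to-bq").getOrCreate()

# Hypothetical path to raw JSON events landed in Cloud Storage.
events = spark.read.json("gs://example-bucket/raw/events/")

# Aggregate event counts per day and type.
daily = (
    events.groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
          .agg(F.count("*").alias("event_count"))
)

# Requires the spark-bigquery connector; names are hypothetical.
(daily.write.format("bigquery")
      .option("table", "analytics.daily_event_counts")
      .option("temporaryGcsBucket", "example-temp-bucket")
      .mode("overwrite")
      .save())
```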
Posted 2 weeks ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Data Engineer - Azure Databricks, PySpark, Python, Airflow | Chennai/Pune, India (6-10 years exp only)

YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES

Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Junior Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins, and Bitbucket/GitHub.

Responsibilities
- Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies.
- Deploy application components using CI/CD pipelines.
- Build utilities for monitoring and automating repetitive functions.
- Collaborate with Agile cross-functional teams, internal and external clients, including Operations, Infrastructure, and Tech Ops.
- Collaborate with the Data Science team to productionize ML models.
- Participate in a rotational support schedule to respond to customer queries and deploy bug fixes in a timely and accurate manner.

Qualifications
- 6-10 years of applicable software engineering experience.
- Strong fundamentals with experience in big data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL.
- Must have experience in cloud technologies, preferably Microsoft Azure.
- Must have experience in performance optimization of Spark workloads.
- Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker.
- Good to have knowledge of Snowflake.
- Good to have knowledge of relational databases, preferably PostgreSQL.
- Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business.
- Minimum B.S. degree in Computer Science, Computer Engineering, or a related field.

Additional Information

Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ

NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion

NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
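As an illustration of the "performance optimization of Spark workloads" requirement in the posting above, a minimal, hedged sketch of two common tuning moves: broadcasting a small dimension table and right-sizing shuffle partitions. Paths, table names, and the partition count are hypothetical.

```python
# Hedged sketch of routine Spark workload tuning; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("perf-tuning").getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions", "200")  # tune to cluster size

sales = spark.read.parquet("/data/sales")    # large fact table
stores = spark.read.parquet("/data/stores")  # small dimension table

# Broadcasting the small side avoids shuffling the large fact table.
joined = sales.join(F.broadcast(stores), "store_id")

# Cache only when the result is reused by several downstream actions.
joined.cache()
summary = joined.groupBy("region").agg(F.sum("amount").alias("revenue"))
summary.write.mode("overwrite").parquet("/data/revenue_by_region")
```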
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Data Engineer - Azure Databricks, PySpark, Python, Airflow | Chennai/Pune, India (3-6 years exp only)

YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES

Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Junior Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins, and Bitbucket/GitHub.

Responsibilities
- Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies.
- Deploy application components using CI/CD pipelines.
- Build utilities for monitoring and automating repetitive functions.
- Collaborate with Agile cross-functional teams, internal and external clients, including Operations, Infrastructure, and Tech Ops.
- Collaborate with the Data Science team to productionize ML models.
- Participate in a rotational support schedule to respond to customer queries and deploy bug fixes in a timely and accurate manner.

Qualifications
- 3-6 years of applicable software engineering experience.
- Strong fundamentals with experience in big data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL.
- Must have experience in cloud technologies, preferably Microsoft Azure.
- Must have experience in performance optimization of Spark workloads.
- Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker.
- Good to have knowledge of Snowflake.
- Good to have knowledge of relational databases, preferably PostgreSQL.
- Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business.
- Minimum B.S. degree in Computer Science, Computer Engineering, or a related field.

Additional Information

Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ

NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion

NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Big Data Architect working on a contract basis for a renowned client, you will utilize your expertise in technologies such as Hadoop, NoSQL, Spark, PySpark, Spark Streaming, Elasticsearch, Kafka, and Scala/Java, along with ETL platforms and stores including HBase, Cassandra, and MongoDB. Your primary role will involve ensuring the completion of surveys and addressing any queries promptly. You will play a crucial part in conceptualizing action plans by engaging with clients, Delivery Managers, vertical delivery heads, and service delivery heads. Your responsibilities will also include driving account-wise tracking of action plans aimed at enhancing Customer Satisfaction (CSAT) across various projects. You will conduct quarterly pulse surveys for selected accounts or projects to ensure periodic check-ins and feedback collection, and you will support the Account Leadership teams in tracking and managing client escalations effectively to ensure timely closure. With over 10 years of experience and a graduate degree in any discipline, you will contribute to the success of projects in a hybrid work mode. Immediate availability to join is essential for this role based in Hyderabad.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Thoucentric, the consulting arm of Xoriant, a renowned digital engineering services company with 5,000 employees, is looking for a skilled Integration Consultant with 5 to 6 years of experience to join their team. As part of the consulting business of Xoriant, you will be involved in Business Consulting, Program & Project Management, Digital Transformation, Product Management, Process & Technology Solutioning, and Execution across functional areas such as Supply Chain, Finance & HR, and Sales & Distribution in the US, UK, Singapore, and Australia.

Your role will involve designing, building, and maintaining data pipelines and ETL workflows using tools like AWS Glue, CloudWatch, PySpark, APIs, SQL, and Python. You will be responsible for creating and optimizing scalable data pipelines, developing ETL workflows, analyzing and processing data, monitoring pipeline health, integrating APIs, and collaborating with cross-functional teams to provide effective solutions.

**Key Responsibilities**
- **Pipeline Creation and Maintenance:** Design, develop, and deploy scalable data pipelines ensuring data accuracy and integrity.
- **ETL Development:** Create ETL workflows using AWS Glue and PySpark, adhering to data governance and security standards.
- **Data Analysis and Processing:** Write efficient SQL queries and develop Python scripts to automate data tasks.
- **Monitoring and Troubleshooting:** Use AWS CloudWatch to monitor pipeline performance and resolve issues promptly.
- **API Integration:** Integrate and manage APIs to connect external data sources and services.
- **Collaboration:** Work closely with cross-functional teams to understand data requirements and communicate effectively with stakeholders.

**Required Skills and Qualifications**
- **Experience:** 5-6 years
- **o9 solutions platform experience is mandatory**
- Strong expertise in AWS Glue, CloudWatch, PySpark, Python, and SQL.
- Hands-on experience in API integration, ETL processes, and pipeline creation.
- Strong analytical and problem-solving skills.
- Familiarity with data security and governance best practices.

**Preferred Skills**
- Knowledge of other AWS services such as S3, EC2, Lambda, or Redshift.
- Experience with PySpark, APIs, SQL optimization, and Python.
- Exposure to data visualization tools or frameworks.

**Education:**
- Bachelor's degree in Computer Science, Information Technology, or a related field.

In this role at Thoucentric, you will have the opportunity to define your career path, work in a dynamic consulting environment, collaborate with Fortune 500 companies and startups, and be part of a supportive working environment that encourages personal development. Join us in the exciting growth story of Thoucentric in Bangalore, India. (Posting Date: 05/22/2025)
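A hedged sketch of the AWS Glue + PySpark ETL pattern this posting centers on: read from the Glue Data Catalog, transform with Spark, and write curated output to S3. The database, table, and bucket names are hypothetical placeholders.

```python
# Hedged sketch of a Glue ETL job; catalog and S3 names are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read a raw table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
).toDF()

# Basic cleansing: dedupe on the key and drop invalid totals.
clean = orders.dropDuplicates(["order_id"]).filter("order_total >= 0")

# Land the curated output in S3 as Parquet.
clean.write.mode("overwrite").parquet("s3://example-curated/orders/")
```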
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 01-Aug-2025

About the role

Refer to responsibilities.

What is in it for you

At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.

Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn an additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for

Job Summary: Build solutions for real-world problems in workforce management for retail. You will work with a team of highly skilled developers and product managers throughout the entire software development life cycle of the products we own. In this role you will be responsible for designing, building, and maintaining our big data pipelines. Your primary focus will be on developing data pipelines using available technologies.

In this job, I'm accountable for:

Following our Business Code of Conduct and always acting with integrity and due diligence, with these specific risk responsibilities:
- Represent Talent Acquisition in all forums/seminars pertaining to process, compliance and audit.
- Perform other miscellaneous duties as required by management.
- Drive CI culture, implementing CI projects and innovation within the team.
- Design and implement scalable and reliable data processing pipelines using Spark/Scala/Python and the Hadoop ecosystem.
- Develop and maintain ETL processes to load data into our big data platform.
- Optimize Spark jobs and queries to improve performance and reduce processing time.
- Work with product teams to communicate and translate needs into technical requirements.
- Design and develop monitoring tools and processes to ensure data quality and availability.
- Collaborate with other teams to integrate data processing pipelines into larger systems.
- Deliver high quality code and solutions, bringing solutions into production.
- Perform code reviews to optimise technical performance of data pipelines.
- Continually look for how we can evolve and improve our technology, processes, and practices.
- Lead group discussions on system design and architecture.
- Manage and coach individuals, providing regular feedback and career development support aligned with business goals.
- Allocate and oversee team workload effectively, ensuring timely and high-quality outputs.
- Define and streamline team workflows, ensuring consistent adherence to SLAs and data governance practices.
- Monitor and report key performance indicators (KPIs) to drive continuous improvement in delivery efficiency and system uptime.
- Oversee resource allocation and prioritization, aligning team capacity with project and business demands.

Key people and teams I work with in and outside of Tesco: TBS & Tesco Senior Management, TBS Reporting Team, Tesco UK / ROI / Central Europe, Business stakeholders.

People, budgets and other resources I am accountable for in my job: Any other accountabilities by the business.

Experience relevant for this job: 7+ years of experience in building and maintaining big data platforms using Spark/Scala.

Operational skills relevant for this job:
- Skills: ETL, YARN, Spark, Hive, Hadoop, PySpark/Python (any one), Linux/Unix/Shell environments (any one), query optimisation.
- Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, Streaming etc.
- Experience with ETL processes and data modelling.
- Problem-solving and troubleshooting skills.
- Working knowledge of Oozie/Airflow.
- Experience in writing unit test cases and shell scripting.
- Ability to work independently and as part of a team in a fast-paced environment.
- Good to have: Kafka, REST API/reporting tools.

You will need

Refer to responsibilities.

About us

Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services organisation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business.
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
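For illustration, a minimal sketch of the Airflow orchestration mentioned in the skills above: a nightly DAG that runs an ingest job followed by a data quality check. The DAG id, schedule, and spark-submit commands are hypothetical, and the `schedule` argument assumes Airflow 2.4 or newer.

```python
# Hedged sketch of a scheduled Spark pipeline in Airflow; names are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="workforce_daily_pipeline",
    schedule="0 2 * * *",              # run nightly at 02:00
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="spark-submit /jobs/ingest.py",
    )
    quality_check = BashOperator(
        task_id="quality_check",
        bash_command="spark-submit /jobs/dq_checks.py",
    )
    # Quality checks run only after ingestion succeeds.
    ingest >> quality_check
```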
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Kenvue is currently recruiting for a: Sr. MLOps Engineer

Who We Are

At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we're the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON'S® and BAND-AID® - that you already know and love. Science is our passion; care is our talent. Our global team is ~22,000 brilliant people with a workplace culture where every voice matters, and every contribution is appreciated. We are passionate about insights and innovation and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage – and have brilliant opportunities waiting for you! Join us in shaping our future – and yours.

Role reports to: Sr. Manager - DTO Strategy & Operation
Location: Asia Pacific, India, Karnataka, Bangalore
Work Location: Hybrid

What You Will Do

This position reports into the Smart Manufacturing Capability Owner and is based in Bangalore. The Sr. MLOps Engineer will drive the development and optimization of machine learning pipelines in a production environment. You will be responsible for creating reusable templates for different machine learning use cases, ensuring efficient model deployment and monitoring. This role requires hands-on experience with Azure Machine Learning, Databricks, and PySpark, as well as proficiency in managing CI/CD workflows with Bitbucket and Jenkins. Expertise in SonarQube, AKS, API management, and model optimization is also critical.

Key Responsibilities
- Design, implement, and manage scalable machine learning (ML) pipelines using Azure ML, Databricks, and PySpark.
- Build and maintain automated CI/CD pipelines with GitHub and GitHub Actions, incorporating SonarQube to ensure code quality and security standards.
- Use Azure Kubernetes Service (AKS) to containerize and deploy machine learning models, ensuring high availability and scalability.
- Understand the overall architecture and work on scalable solutions.
- Develop reusable templates for various ML use cases to streamline the model deployment process and enhance operational efficiency.
- Design and manage APIs to facilitate seamless interaction between ML models and other applications, ensuring robust, secure, and scalable API interfaces.
- Perform model optimization, monitor data drift and data refresh checks, and ensure the ML pipelines are cost-efficient.
- Implement cost monitoring and management strategies to ensure efficient use of resources, particularly in the model training and deployment phases.
- Work closely with data scientists, DevOps, and IT teams to deploy and manage machine learning models across environments.
- Provide thorough documentation for ML workflows, pipeline templates, and optimization strategies to support cross-team collaboration.

What We Are Looking For

Required Qualifications
- Bachelor's degree in engineering, computer science, or a related field.
- 4-6 years of total work experience, with at least 2-3 years of experience working with the Azure MLOps tool stack.
- Strong knowledge of solution architecture and/or machine learning with a focus on MLOps.
- Hands-on experience in deploying and maintaining machine learning models in production.
- Strong understanding of DevOps practices, particularly in cloud environments.
- Knowledge of containerization tools such as Docker and orchestration frameworks like Kubernetes.
- Excellent problem-solving skills and ability to work in a collaborative, fast-paced environment.
- Experience deploying MLOps solutions on AKS or API platforms.
- Hands-on experience with Azure Machine Learning and Databricks.
- Knowledge of code quality automation using tools like SonarQube.

Desired Qualifications
- Familiarity with solution architecture is a plus.
- Good to have an Azure certification: AI-900, DP-100, or AZ-305.

What's In It For You
- Competitive Benefit Package
- Paid Company Holidays, Paid Vacation, Volunteer Time, Summer Fridays & More!
- Learning & Development Opportunities
- Employee Resource Groups
- This list could vary based on location/region

Kenvue is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, or protected veteran status and will not be discriminated against on the basis of disability. If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.
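As a hedged illustration of one MLOps step this role implies: logging and registering a trained model so a downstream pipeline can deploy it (for example, to AKS). The model, metric, and registry name are hypothetical, and an MLflow tracking server is assumed to be configured; both Azure ML and Databricks expose one.

```python
# Hedged sketch: log and register a model for later deployment; names are
# hypothetical and a configured MLflow tracking server is assumed.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X, y)

with mlflow.start_run():
    # Record a training metric alongside the artifact.
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model makes it addressable by version in CD pipelines.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demand-forecaster",  # hypothetical registry name
    )
```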
Posted 2 weeks ago
0.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
Date Opened: 07/25/2025
Job Type: Permanent
RSD NO: 11513
Industry: IT Services
Min Experience: 5
Max Experience: 7
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600018

Job Description

Data Scientist – Retail/CPG Domain

Are you a data-driven problem solver with a passion for Retail and CPG analytics? We're looking for a contract-based Data Scientist to join our team and deliver high-impact insights that drive business decisions.

What You'll Work On:
- Category & Product Analytics – product assortment, category performance, and new product launches
- Sales Data Analysis – sales trends, forecasting, and identifying key drivers
- Customer Analytics – loyalty data and campaign effectiveness (good to have)

Key Skills:
- Advanced data analysis and ML model development
- Experience with CI/CD pipelines for ML workflows (MLflow, Azure ML)
- Strong in Python, SQL, Databricks; PySpark is a plus
- Power BI experience is a plus for building impactful dashboards and visualizations
- Ability to translate insights into compelling stories for senior stakeholders
- Deep understanding of Retail and FMCG domain challenges

Experience Required: 5–7 years in analytics
Location: Bangalore / Chennai

At Indium, diversity, equity, and inclusion (DEI) are the cornerstones of our values. We champion DEI through a dedicated council, expert sessions, and tailored training programs, ensuring an inclusive workplace for all. Our initiatives, including the WE@IN women empowerment program and our DEI calendar, foster a culture of respect and belonging. Recognized with the Human Capital Award, we are committed to creating an environment where every individual thrives. Join us in building a workplace that values diversity and drives innovation.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Senior Data Analytics Engineer at Ajmera Infotech Private Limited (AIPL), you will have the opportunity to power mission-critical decisions with governed insights using cutting-edge technologies and solutions. Ajmera Infotech is a reputable company that builds planet-scale software for NYSE-listed clients in highly regulated domains such as HIPAA, FDA, and SOC 2. Our team of 120 engineers specializes in delivering production-grade systems that provide strategic advantages through data-driven decision-making.

You will play a crucial role in building end-to-end analytics solutions, from lakehouse pipelines to real-time dashboards, applying fail-safe engineering practices with TDD, CI/CD, DAX optimization, Unity Catalog, and cluster tuning. Working with a modern stack including Databricks, PySpark, Delta Lake, Power BI, and Airflow, you will have the opportunity to create impactful solutions that drive business success.

At AIPL, you will be part of a mentorship culture where you can lead code reviews, share best practices, and grow as a domain expert. You will work in a mission-critical context, helping enterprises migrate legacy analytics into cloud-native, governed platforms with a compliance-first mindset in HIPAA-aligned environments.

Key Responsibilities:
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions including dashboards, semantic layers, and paginated reports, focusing on DAX optimization.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document pipeline logic, RLS rules, and more in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.

Must-Have Skills:
- 5+ years in analytics engineering, with 3+ years in production Databricks/Spark contexts.
- Proficiency in advanced SQL (including windowing), expert PySpark, Delta Lake, and Unity Catalog.
- Mastery of Power BI including DAX optimization, security rules, and paginated reports.
- Experience in SSRS-to-Power BI migration with RDL logic replication.
- Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
- Excellent communication skills to bridge technical and business audiences.

Nice-to-Have Skills:
- Databricks Data Engineer Associate certification.
- Experience with streaming pipelines (Kafka, Structured Streaming).
- Familiarity with data quality frameworks such as dbt or Great Expectations.
- BI diversity, including experience with Tableau, Looker, or similar platforms.
- Knowledge of cost governance (Power BI Premium capacity, Databricks chargeback).

Join us at AIPL and enjoy a competitive salary package with performance-based bonuses, along with comprehensive health insurance for you and your family. Take on this exciting opportunity to make a significant impact in the world of data analytics and engineering.
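For illustration, a minimal sketch of a bronze-to-silver Delta Lake step like the lakehouse pipelines this posting describes, assuming a Databricks runtime with Delta Lake and Unity Catalog; the catalog, schema, and table names are hypothetical.

```python
# Hedged sketch of one medallion-architecture hop; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Raw landed data registered in Unity Catalog (hypothetical table).
bronze = spark.read.table("main.bronze.raw_orders")

# Standard silver-layer cleansing: dedupe, type the date, drop bad rows.
silver = (
    bronze.dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_ts"))
          .filter(F.col("order_total").isNotNull())
)

(silver.write.format("delta")
       .mode("overwrite")
       .saveAsTable("main.silver.orders"))  # hypothetical target table
```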
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Databricks Engineer-Lead, you will be responsible for designing and developing ETL pipelines using Azure Data Factory for data ingestion and transformation. You will collaborate with various Azure stack modules such as Data Lakes and SQL Data Warehouse to create robust data solutions. Your role will involve writing efficient SQL, Python, and PySpark code for data processing and transformation. It is essential to understand and translate business requirements into technical designs, develop mapping documents, and adhere to transformation rules as per the project scope. Effective communication with stakeholders to ensure smooth project execution is a crucial aspect of this role.

To excel in this position, you should possess 7-10 years of experience in data ingestion, data processing, and analytical pipelines involving big data and relational databases. Hands-on experience with Azure services like Azure Data Lake Storage, Azure Databricks, Azure Data Factory, Azure Synapse Analytics, and Azure SQL Database is required. Proficiency in SQL, Python, and PySpark for data manipulation is essential. Familiarity with DevOps practices and CI/CD deployments is a plus. Strong communication skills and attention to detail, especially in high-pressure situations, are highly valued in this role. Previous experience in the insurance or financial industry is preferred.

This role is based in Hyderabad and requires the selected candidate to work from the office. If you are passionate about leveraging Databricks, PySpark, SQL, and other Azure technologies to drive innovative data solutions, this position offers an exciting opportunity to lead and contribute to impactful projects.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Wipro Limited is a leading technology services and consulting company that focuses on creating innovative solutions to meet the complex digital transformation needs of clients. With a holistic portfolio of capabilities in consulting, design, engineering, and operations, Wipro helps clients achieve their boldest ambitions and develop sustainable, future-ready businesses. With a global presence of over 230,000 employees and business partners across 65 countries, Wipro is committed to helping customers, colleagues, and communities thrive in an ever-changing world.

We are currently looking for an ETL Test Lead with the following qualifications:

Primary Skill: ETL Testing
Secondary Skill: Azure

Key Requirements:
- 5+ years of experience in data warehouse testing, with at least 2 years of experience in Azure Cloud
- Strong understanding of data marts and data warehouse concepts
- Expertise in SQL, with the ability to create source-to-target comparison test cases
- Proficient in creating test plans, test cases, traceability matrices, and closure reports
- Proficient in PySpark, Python, Git, Jira, and JTM

Band: B3
Location: Pune, Chennai, Coimbatore, Bangalore
Mandatory Skills: ETL Testing
Experience: 5-8 Years

At Wipro, we are in the process of building a modern organization that is focused on digital transformation. We are looking for individuals who are inspired by reinvention and are committed to evolving themselves, their careers, and their skills. Join us in our journey to constantly evolve and adapt to the changing world around us. Come to Wipro and realize your ambitions. We welcome applications from individuals with disabilities.
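As an illustration of the "source-to-target comparison test cases" requirement above, a hedged PySpark sketch that flags rows lost or invented between a staging source and a warehouse target. Table names are hypothetical; exceptAll performs an exact row-level comparison.

```python
# Hedged sketch of a source-to-target reconciliation test; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source = spark.read.table("staging.customers")                 # hypothetical source
target = spark.read.table("dw.dim_customer").select(source.columns)

# Rows present in the source but never loaded into the target, and vice versa.
missing_in_target = source.exceptAll(target)
unexpected_in_target = target.exceptAll(source)

assert missing_in_target.count() == 0, "rows lost between source and target"
assert unexpected_in_target.count() == 0, "rows in target with no source row"
```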
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have a minimum of 5-7 years of experience in data engineering and transformation on the cloud, including at least 3 years focused on Azure data engineering and Databricks. Your expertise should include supporting and developing data warehouse workloads at an enterprise level. Proficiency in PySpark is essential for developing and deploying workloads to run on the Spark distributed computing platform. A Bachelor's degree in Computer Science, Information Technology, Engineering (Computer/Telecommunication), or a related field is required for this role. Experience with cloud deployment, preferably on Microsoft Azure, is highly desirable. You should also have experience implementing platform and application monitoring using cloud-native tools, as well as implementing application self-healing through proactive and reactive automated measures.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As an Associate Managing Consultant in Strategy & Transformation at Mastercard's Performance Analytics division, you will be part of the Advisors & Consulting Services group specializing in translating data into actionable insights. Your role will involve leveraging both Mastercard and customer data to design, implement, and scale analytical solutions for clients. By utilizing qualitative and quantitative analytical techniques and enterprise applications, you will synthesize analyses into clear recommendations and impactful narratives.

In this position, you will manage deliverable development and workstreams on projects spanning various industries and problem statements. You will contribute to developing analytics strategies for large clients, leveraging data and technology solutions to unlock client value. Building and maintaining trusted relationships with client managers will be crucial, as you act as a reliable partner in creating predictive models and reviewing analytics end-products for accuracy, quality, and timeliness.

Collaboration and teamwork are central to this role: you will be tasked with developing sound business recommendations, delivering effective client presentations, and leading team and external meetings. Your responsibilities will also include contributing to the firm's intellectual capital, mentoring junior consultants, and fostering effective working relationships with local and global teams.

To be successful in this role, you should possess an undergraduate degree with experience in data and analytics, business intelligence, and descriptive, predictive, or prescriptive analytics. You should be adept at analyzing large datasets, synthesizing key findings, and providing recommendations through descriptive analytics and business intelligence. Proficiency in data analytics software such as Python, R, SQL, and SAS, as well as advanced skills in Word, Excel, and PowerPoint, is essential. Effective communication in English and the local office language, eligibility to work in the country of application, and a proactive attitude towards learning and growth are also required.

Preferred qualifications for this role include additional experience working with the Hadoop framework, data visualization tools like Tableau and Power BI, and coaching junior delivery consultants. While an MBA or master's degree with a relevant specialization is not mandatory, having relevant industry expertise would be advantageous.

At Mastercard, we prioritize information security, and every individual associated with the organization is expected to abide by security policies, maintain the confidentiality and integrity of accessed information, report any security violations or breaches, and complete required security trainings to ensure the protection of Mastercard's assets, information, and networks.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Gujarat
On-site
As a Data Scientist at Micron Technology in Sanand, Gujarat, you will have the opportunity to play a pivotal role in transforming how the world uses information to enrich life for all. Micron Technology is a global leader in innovating memory and storage solutions, driving the acceleration of information into intelligence and inspiring advancements in learning, communication, and progress.

Your responsibilities will involve a broad range of tasks, including but not limited to:
- Developing a strong career path as a Data Scientist in highly automated industrial manufacturing, focusing on analysis and machine learning of terabytes and petabytes of diverse datasets.
- Extracting data from various databases using SQL and other query languages, and applying data cleansing, outlier identification, and missing-data techniques.
- Applying the latest mathematical and statistical techniques to analyze data and identify patterns.
- Building web applications as part of your job scope.
- Utilizing cloud-based analytics and machine learning modeling.
- Building APIs for application integration.
- Engaging in statistical modeling, feature extraction and analysis, feature engineering, and supervised/unsupervised/semi-supervised learning.
- Demonstrating proficiency in data analysis and validation, as well as strong software development skills.

In addition to the above, you should possess above-average skills in:
- Programming fluency in Python.
- Knowledge of statistics, machine learning, and other advanced analytical methods.
- Familiarity with JavaScript, AngularJS 2.0, and Tableau, with a background in OOP considered an advantage.
- Understanding of PySpark and/or libraries for distributed and parallel processing.
- Experience with TensorFlow and/or other statistical software with scripting capabilities.
- Knowledge of time series data, images, semi-supervised learning, and data with frequently changing distributions is a plus.
- Understanding of Manufacturing Execution Systems (MES) is beneficial.

You should be able to work in a dynamic, fast-paced environment, be self-motivated and adaptable to new technologies, and possess a passion for data and information with excellent analytical, problem-solving, and organizational skills. Furthermore, effective communication with distributed teams (written, verbal, and presentation) and the ability to work collaboratively towards common objectives are key attributes for this role.

To be eligible for this position, you should hold a Bachelor's or Master's degree in Computer Science or Electrical/Electronic Engineering, with a CGPA of 7.0 and above.

Join Micron Technology, Inc., where our relentless focus on customers, technology leadership, and operational excellence drives the creation of high-performance memory and storage products that power the data economy. Visit micron.com/careers to learn more about our innovative solutions and opportunities for growth. For any assistance with the application process or to request reasonable accommodations, please reach out to hrsupport_india@micron.com.

Micron Technology strictly prohibits the use of child labor and complies with all applicable laws, rules, regulations, and international labor standards. Candidates are encouraged to use AI tools to enhance their application materials, ensuring accuracy and truthfulness in representing their skills and experiences. Fabrication or misrepresentation will lead to immediate disqualification.
As a Data Scientist at Micron Technology, you will be part of a transformative journey that shapes the future of information utilization and enriches lives across the globe.
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Karnataka
On-site
The role of S&C GN AI - Insurance AI Generalist Consultant at Accenture Global Network involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions. As a Team Lead/Consultant at the Bengaluru BDC7C location, you will provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

In this position, you will be part of a unified powerhouse that combines the capabilities of Strategy & Consulting with Data and Artificial Intelligence. You will architect, design, build, deploy, deliver, and monitor advanced analytics models, including generative AI, for various client problems. Additionally, you will develop functional aspects of generative AI pipelines and interface with clients to understand engineering and business problems.

The ideal candidate for this role should have 5+ years of experience in data-driven techniques, a Bachelor's/Master's degree in Mathematics, Statistics, Economics, Computer Science, or a related field, and a solid foundation in statistical modeling and machine learning algorithms. Proficiency in programming languages such as Python, PySpark, SQL, and Scala is required, as is experience implementing AI solutions for the insurance industry. Strong communication, collaboration, and presentation skills are essential to effectively convey complex data insights and recommendations to clients and stakeholders.

Furthermore, hands-on experience with Azure, AWS, or Databricks tools is a plus, and familiarity with GenAI, LLMs, RAG architecture, and LangChain frameworks is beneficial. This role offers an opportunity to work on innovative projects, career growth, and leadership exposure within Accenture, a global community that continually pushes the boundaries of business capabilities. If you are a motivated individual with strong analytical, problem-solving, and communication skills, and the ability to thrive in a fast-paced, dynamic environment, this role provides an exciting opportunity to contribute to Accenture's future growth and be part of a vibrant global community.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Kolkata, West Bengal
On-site
We are looking for an experienced professional with strong mathematical and statistical expertise, as well as a natural curiosity and creative mindset to uncover hidden opportunities within data. Your primary goal will be to realize the full potential of the data by asking questions, connecting dots, and thinking innovatively.

Responsibilities:
- Design and implement scalable and efficient data storage solutions using Snowflake.
- Write, optimize, and troubleshoot SQL queries within the Snowflake environment.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Identify gaps in existing pipelines and resolve issues.
- Develop data models to meet reporting needs by working closely with the business.
- Assist team members in resolving technical challenges.
- Engage in technical discussions with client architects and team members.
- Orchestrate data pipelines in a scheduler via Airflow.
- Integrate Snowflake with various data sources and third-party tools.

Skills and Qualifications:
- Bachelor's and/or Master's degree in Computer Science or equivalent experience.
- Minimum 7 years of experience in Data & Analytics with strong communication and presentation skills.
- At least 6 years of experience in Snowflake implementations and large-scale data warehouse end-to-end implementation.
- Databricks certified architect.
- Proficiency in SQL and scripting languages (e.g., Python, Spark, PySpark) for data manipulation and automation.
- Solid understanding of cloud platforms (AWS, Azure, GCP) and their integration with data tools.
- Familiarity with data governance and data management practices.
- Exposure to data sharing, Unity Catalog, dbt, replication tools, and performance tuning will be advantageous.

About Tredence:
Tredence focuses on delivering powerful insights into profitable actions by combining business analytics, data science, and software engineering. We work with leading companies worldwide, providing prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. Tredence is an equal opportunity employer that values diversity and is dedicated to fostering an inclusive environment for all employees. To learn more about us, visit our website: [Tredence Website](https://www.tredence.com/)
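For illustration, a hedged sketch of a Snowflake query step such a pipeline might run via the Snowflake Python connector. All connection parameters and the table name are hypothetical placeholders.

```python
# Hedged sketch of querying Snowflake from Python; all details are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    user="etl_user",
    password="********",
    account="example-account",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Aggregate daily revenue from a hypothetical orders table.
    cur.execute(
        "SELECT order_date, SUM(order_total) AS revenue "
        "FROM orders GROUP BY order_date ORDER BY order_date"
    )
    for row in cur.fetchall():
        print(row)
finally:
    cur.close()
    conn.close()
```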
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Principal Engineer / Architect at our organization, you will be responsible for combining deep technical expertise with strategic thinking to design and implement scalable, secure, and modern digital systems. This senior technical leadership role requires hands-on architecture experience, a strong command of cloud-native development, and a successful track record of leading teams through complex solution delivery. Your role will involve collaborating with cross-functional teams including engineering, product, DevOps, and business stakeholders to define technical roadmaps, ensure alignment with enterprise architecture principles, and guide platform evolution.

Key Responsibilities:

Architecture & Design:
- Lead the design of modular, microservices-based, and secure architecture for scalable digital platforms.
- Define and enforce cloud-native architectural best practices using Azure, AWS, or GCP.
- Prepare high-level design artefacts, interface contracts, data flow diagrams, and service blueprints.

Cloud Engineering & DevOps:
- Drive infrastructure design and automation using Terraform or CloudFormation.
- Support Kubernetes-based container orchestration and efficient CI/CD pipelines.
- Optimize for performance, availability, cost, and security using modern observability stacks and metrics.

Data & API Strategy:
- Architect systems that handle structured and unstructured data with performance and reliability.
- Design APIs with reusability, governance, and lifecycle management in mind.
- Guide caching, query optimization, and stream/batch data pipelines across the stack.

Technical Leadership:
- Act as a hands-on mentor to engineering teams, leading by example and resolving architectural blockers.
- Review technical designs, codebases, and DevOps pipelines to uphold engineering excellence.
- Translate strategic business goals into scalable technology solutions with pragmatic trade-offs.

Key Requirements:

Must Have:
- 5+ years in software architecture or principal engineering roles with real-world system ownership.
- Strong experience in cloud-native architecture with AWS, Azure, or GCP (certification preferred).
- Programming experience with Java, Python, or Node.js, and frameworks like Flask, FastAPI, Celery.
- Proficiency with PostgreSQL, MongoDB, Redis, and scalable data design patterns.
- Expertise in Kubernetes, containerization, and GitOps-style CI/CD workflows.
- Strong foundation in Infrastructure as Code (Terraform, CloudFormation).
- Excellent verbal and written communication; proven ability to work across technical and business stakeholders.

Nice to Have:
- Experience with MLOps pipelines, observability stacks (ELK, Prometheus/Grafana), and tools like MLflow, Langfuse.
- Familiarity with Generative AI frameworks (LangChain, LlamaIndex) and Vector Databases (Milvus, ChromaDB).
- Understanding of event-driven, serverless, and agentic AI architecture models.
- Python libraries such as pandas, NumPy, PySpark, and support for multi-component pipelines (MCP).

Preferred:
- Prior experience leading technical teams in regulated domains (finance, healthcare, govtech).
- Cloud security, cost optimization, and compliance-oriented architectural mindset.

What You'll Gain:
- Work on mission-critical projects using the latest cloud, data, and AI technologies.
- Collaborate with a world-class, cross-disciplinary team.
- Opportunities to contribute to open architecture, reusable frameworks, and technical IP.
- Career advancement via leadership, innovation labs, and enterprise architecture pathways.
- Competitive compensation, flexibility, and a culture that values innovation and impact.
Posted 2 weeks ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are looking for Data Engineers with expertise in SAS, Python, and PySpark to support code migration and data migration projects from legacy environments to cloud platforms. This role will entail hands-on experience leveraging EXL's Generative AI solution named Code Harbor to streamline migration processes, automate code refactoring, and optimize data transformation. The ideal candidate will have 5+ years of relevant experience in IT services, with strong knowledge of modernizing data pipelines, transforming legacy codebases, and optimizing big data processing for cloud infrastructure.

Key Responsibilities
- Migrate code from SAS/legacy systems to Python/cloud-native frameworks.
- Develop and optimize enhanced data pipelines using PySpark for efficient cloud-based processing.
- Refactor and modernize legacy SAS-based workflows, ensuring seamless AI-assisted translation for cloud execution.
- Ensure data integrity, security, and performance throughout the migration lifecycle.
- Troubleshoot AI-generated outputs to refine accuracy and resolve migration-related challenges.

Required Skills & Qualifications
- Strong expertise in SAS, Python, and PySpark, with experience in code migration and data transformation.
- Strong problem-solving skills and adaptability in fast-paced AI-driven migration projects.
- Excellent communication and collaboration skills to work with cross-functional teams.

Education Background
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Tier I/II candidates preferred.
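As a hedged illustration of the SAS-to-PySpark translation work this posting describes: a generic SAS data step (not client code) and one possible PySpark equivalent. Paths and column names are hypothetical.

```python
# Hedged sketch of translating a SAS data step to PySpark. The SAS original,
# shown for reference, is a generic illustration:
#
#   data work.high_value;
#       set raw.orders;
#       where order_total > 1000;
#       margin = order_total - cost;
#   run;
#
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.parquet("/data/raw/orders")  # hypothetical source

high_value = (
    orders.filter(F.col("order_total") > 1000)                  # WHERE clause
          .withColumn("margin", F.col("order_total") - F.col("cost"))
)

high_value.write.mode("overwrite").parquet("/data/work/high_value")
```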
Posted 2 weeks ago
2.0 - 3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are looking for Data Engineers with expertise in SAS, Python, and PySpark to support code migration and data migration projects from legacy environments to cloud platforms. This role will entail hands-on experience leveraging EXL's Generative AI solution named Code Harbor to streamline migration processes, automate code refactoring, and optimize data transformation. The ideal candidate will have 2-3 years of relevant experience in IT services, with strong knowledge of modernizing data pipelines, transforming legacy codebases, and optimizing big data processing for cloud infrastructure.

Key Responsibilities
- Migrate code from SAS/legacy systems to Python/cloud-native frameworks.
- Develop and optimize enhanced data pipelines using PySpark for efficient cloud-based processing.
- Refactor and modernize legacy SAS-based workflows, ensuring seamless AI-assisted translation for cloud execution.
- Ensure data integrity, security, and performance throughout the migration lifecycle.
- Troubleshoot AI-generated outputs to refine accuracy and resolve migration-related challenges.

Required Skills & Qualifications
- Strong expertise in SAS, Python, and PySpark, with experience in code migration and data transformation.
- Strong problem-solving skills and adaptability in fast-paced AI-driven migration projects.
- Excellent communication and collaboration skills to work with cross-functional teams.

Education Background
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Tier I/II candidates preferred.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Tamil Nadu
On-site
Wipro Limited is a leading technology services and consulting company committed to developing innovative solutions for clients' most intricate digital transformation requirements. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we empower clients to achieve their most ambitious goals and build sustainable, future-ready businesses. Our global presence spans 65 countries with over 230,000 employees and business partners, dedicated to supporting our customers, colleagues, and communities in thriving amidst an ever-evolving world.

We are currently seeking a Sr ETL Test Engineer with the following qualifications:
- Primary Skill: ETL Testing
- Secondary Skill: Azure

The ideal candidate should have:
- At least 5 years of experience in data warehouse testing and a minimum of 2 years of Azure Cloud experience.
- A profound understanding of data marts and data warehouse concepts.
- Proficiency in SQL, with the ability to develop source-to-target comparison test cases in SQL (see the sketch below).
- The ability to create test plans, test cases, traceability matrices, and closure reports.
- Proficiency in PySpark, Python, Git, Jira, and JTM.

Location: Pune, Chennai, Coimbatore, Bangalore
Band: B2 and B3
Mandatory Skills: ETL Testing
Experience Required: 3-5 Years

Join us in reinventing the future at Wipro. We are transforming into a modern organization, striving to be an end-to-end digital transformation partner with the most audacious aspirations. We are looking for individuals who are inspired by reinvention and eager to evolve themselves, their careers, and their skills. At Wipro, we embrace change because it is inherent in our DNA. Be part of a purpose-driven business that encourages you to craft your own reinvention, and realize your aspirations with us at Wipro. We welcome applications from individuals with disabilities.
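A minimal sketch of the source-to-target comparison test this posting asks for, expressed as SQL driven from PySpark. The view names and paths are placeholders; an empty diff in both directions means the test passes.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-test-sketch").getOrCreate()

# Register source and target extracts as SQL views (paths are assumptions).
spark.read.parquet("/data/source").createOrReplaceTempView("src")
spark.read.parquet("/data/target").createOrReplaceTempView("tgt")

# Rows in source missing from target, and rows in target not in source.
missing_in_target = spark.sql("SELECT * FROM src EXCEPT SELECT * FROM tgt")
extra_in_target = spark.sql("SELECT * FROM tgt EXCEPT SELECT * FROM src")

result = "PASS" if missing_in_target.count() == 0 and extra_in_target.count() == 0 else "FAIL"
print("source-to-target comparison:", result)
```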
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
The Specialized Analytics Manager role provides full leadership and supervisory responsibility within a team. You will offer operational and service leadership and guidance to the team, applying in-depth disciplinary knowledge to provide value-added perspectives and advisory services. Your responsibilities may include contributing to the development of new techniques, models, and plans within your area of expertise. Excellent communication and diplomacy skills are essential. You will be responsible for the volume, quality, and timeliness of end results, and will share responsibility for planning and budgets. Your work will have a significant impact on the overall performance and effectiveness of the sub-function/job family. As a manager, you will oversee the motivation and development of the team through professional leadership, including performance evaluation, compensation, hiring, disciplinary actions, terminations, and daily task direction.

In this role, you will work with large and complex internal and external data sets to evaluate, recommend, and support the implementation of business strategies. This involves identifying and compiling data sets using tools such as SAS, SQL, and PySpark to help predict, improve, and measure the success of key business outcomes. You will document data requirements and handle data collection, processing, cleaning, and exploratory data analysis, which may involve statistical models, algorithms, and data visualization techniques. Individuals in this role are often referred to as Data Scientists and will specialize in digital and marketing analytics.

You will also need to assess risk appropriately when making business decisions, with a focus on safeguarding Citigroup, its clients, and assets. This includes driving compliance with laws, rules, and regulations, adhering to policies, applying ethical judgment, and effectively supervising the activity of others to maintain high standards of conduct.

Qualifications:
- Experience in a people manager position
- Strong understanding of Adobe Analytics
- Proficiency in SAS and Python
- Excellent communication skills for coordination with senior business leaders
- A good grasp of financials and P&L metrics
- Background in financial services with an understanding of the credit card business
- Preferably, exposure to digital business and knowledge of digital performance KPIs

This description offers an overview of the responsibilities and qualifications required for the role; other job-related duties may be assigned as necessary.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As an ideal candidate for this role, you will design and architect scalable Big Data solutions within the Hadoop ecosystem. Your key duties include leading architecture-level discussions for data platforms and analytics systems, constructing and optimizing data pipelines using PySpark and other distributed computing tools, translating business requirements into scalable data models and integration workflows, and ensuring the high performance and availability of enterprise-grade data processing systems. You will also play a crucial role in mentoring development teams and offering guidance on best practices and performance tuning (a brief tuning sketch follows this posting).

To excel in this position, you must have architect-level experience with the Big Data ecosystem and enterprise data solutions. Proficiency in Hadoop, PySpark, and distributed data processing frameworks is essential, along with strong hands-on experience in SQL and data warehousing concepts. A deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools is also required. Experience in performance optimization and handling large-scale data sets, coupled with excellent problem-solving, design, and analytical skills, will be highly valued.

While not mandatory, exposure to cloud platforms like AWS, Azure, or GCP for data solutions is a plus, as is familiarity with data governance, data security, and metadata management.

Joining our team offers the opportunity to work with cutting-edge Big Data technologies, gain leadership exposure, and participate directly in architectural decisions. This is a stable, full-time position within a top-tier tech team, offering work-life balance with a standard five-day working week. If you are passionate about Big Data technologies and eager to contribute to innovative solutions, we welcome your application for this exciting opportunity.
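Two routine PySpark optimizations of the kind implied above, broadcasting a small dimension table to avoid a shuffle join and partitioning output so readers can prune files, shown as a brief sketch. The paths, tables, and columns are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

events = spark.read.parquet("/lake/events")        # large fact table (assumed)
regions = spark.read.parquet("/lake/dim_regions")  # small dimension table (assumed)

# Broadcast the small side so the join avoids a full shuffle of the fact table.
enriched = events.join(F.broadcast(regions), on="region_id", how="left")

# Partition output by a commonly filtered column so downstream reads prune files.
(enriched
 .repartition("event_date")
 .write.mode("overwrite")
 .partitionBy("event_date")
 .parquet("/lake/events_enriched"))
```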
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have 5+ years of experience in data analysis, engineering, and science, with proficiency in Azure Data Factory, Azure Databricks, Python, PySpark, SQL, and PL/SQL or SAS.

Your responsibilities will include designing, developing, and maintaining ETL pipelines using Azure Databricks, Azure Data Factory, and other relevant technologies (an illustrative pipeline step follows this posting). You will manage and optimize data storage solutions using Azure Data Lake Storage (ADLS) and develop and deploy data processing workflows using PySpark and Python. Collaborating with data scientists, analysts, and stakeholders to understand data requirements and ensure data quality is essential. The role also involves implementing data integration solutions that ensure seamless data flow across systems, and using GitHub for version control and collaboration on the codebase. Monitoring and troubleshooting data pipelines to guarantee data accuracy and availability is crucial, as is staying current with the latest industry trends and best practices in data engineering.
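An illustrative PySpark ETL step of the kind this role maintains: read raw data from ADLS, take an incremental slice, clean it, and write it back to a curated zone. The abfss:// paths, storage account, and watermark handling are assumptions, not a real configuration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-etl-sketch").getOrCreate()

# Hypothetical ADLS Gen2 containers; the storage account name is invented.
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/orders"
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/orders"

orders = spark.read.parquet(raw_path)

# Incremental slice: only rows newer than the last processed watermark.
last_watermark = "2024-01-01"  # in practice, read from a control table
fresh = orders.filter(F.col("updated_at") > F.lit(last_watermark))

# Basic cleanup: dedupe on the business key and stamp the load time.
cleaned = (fresh
           .dropDuplicates(["order_id"])
           .withColumn("load_ts", F.current_timestamp()))

cleaned.write.mode("append").parquet(curated_path)
```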
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
The primary focus of this role is development work within the Azure Data Lake environment and other related ETL technologies, with responsibility for on-time and on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards. The role also carries L3 responsibilities for ETL processes.

Your responsibilities will include delivering key Azure Data Lake projects within the specified time and budget, and contributing to solution design and build to ensure scalability, performance, and reuse of data and other components (a reuse sketch follows this posting). Strong problem-solving abilities are required, with a focus on managing business outcomes through collaboration with internal and external stakeholders. You should be enthusiastic and willing to learn, continuously developing your skills and techniques, embracing change, and seeking continuous improvement. Effective written and verbal communication in English, good presentation skills, a customer focus, and being a team player are all important.

Qualifications:
- Bachelor's degree in computer science, MIS, business management, or a related field
- Minimum 5 years of experience in Information Technology
- Minimum 4 years of experience in Azure Data Lake

Technical Skills:
- Proven experience in development activities in Data, BI, or Analytics projects
- Experience in solutions delivery with knowledge of the system development lifecycle, integration, and sustainability
- Strong knowledge of PySpark and SQL
- Good understanding of Azure Data Factory or Databricks
- Knowledge of Presto/Denodo desirable
- Knowledge of FMCG business processes desirable

Non-Technical Skills:
- Excellent remote collaboration skills
- Experience working in a matrix organization with diverse priorities
- Exceptional written and verbal communication, collaboration, and listening skills
- Ability to work with agile delivery methodologies
- Ability to ideate requirements and design iteratively with business partners without formal requirements documentation
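A brief sketch of the "reuse of data and other components" idea above: a parameterized PySpark transform that multiple pipelines could share. The function name, key columns, and paths are invented for illustration.

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def standardize(df: DataFrame, key_cols: list, ts_col: str) -> DataFrame:
    """Reusable cleanup step: dedupe on a business key, keeping the latest record."""
    w = Window.partitionBy(*key_cols).orderBy(F.col(ts_col).desc())
    return (df
            .withColumn("_rn", F.row_number().over(w))
            .filter(F.col("_rn") == 1)
            .drop("_rn"))

spark = SparkSession.builder.appName("reuse-sketch").getOrCreate()

# Any pipeline can call the same component with its own key and timestamp columns.
customers = spark.read.parquet("/lake/raw/customers")  # assumed path
latest = standardize(customers, key_cols=["customer_id"], ts_col="updated_at")
```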
Posted 2 weeks ago