6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Enphase Energy is a global energy technology company and a leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, our innovative microinverter technology revolutionized solar power, making it a safer, more reliable, and scalable energy source. Today, the Enphase Energy System enables users to make, use, save, and sell their own power. Enphase is also one of the most successful and innovative clean energy companies in the world, with more than 80 million products shipped across 160 countries. Join our dynamic teams designing and developing next-gen energy technologies and help drive a sustainable future!
About The Role
The Sr. Data Scientist will be responsible for analyzing product performance in the fleet, supporting the data management activities of the Quality/Customer Service organization, and collaborating with the Engineering, Quality, Customer Service, and Information Technology teams.
What You Will Be Doing
- Strong understanding of industrial processes, sensor data, and IoT platforms, essential for building effective predictive maintenance models
- Experience translating theoretical concepts into engineered features, with a demonstrated ability to create features capturing important events or transitions within the data
- Expertise in crafting custom features that highlight unique patterns specific to the dataset or problem, enhancing model predictive power
- Ability to combine and synthesize information from multiple data sources to develop more informative features
- Advanced knowledge of Apache Spark (PySpark, SparkSQL, SparkR) and distributed computing, demonstrated through efficient processing and analysis of large-scale datasets
- Proficiency in Python, R, and SQL, with a proven track record of writing optimized and efficient Spark code for data processing and model training
- Hands-on experience with cloud-based machine learning platforms such as AWS SageMaker and Databricks, showcasing scalable model development and deployment
- Demonstrated capability to develop and implement custom statistical algorithms tailored to specific anomaly detection tasks
- Proficiency in statistical methods for identifying patterns and trends in large datasets, essential for predictive maintenance
- Demonstrated expertise in engineering features to highlight deviations or faults for early detection
- Proven leadership in managing predictive maintenance projects from conception to deployment, with a successful track record of cross-functional team collaboration
- Experience extracting temporal features, such as trends, seasonality, and lagged values, to improve model accuracy (a brief PySpark sketch follows this listing)
- Skills in filtering, smoothing, and transforming data for noise reduction and effective feature extraction
- Experience optimizing code for performance in high-throughput, low-latency environments
- Experience deploying models into production, with expertise in monitoring their performance and integrating them with CI/CD pipelines using AWS, Docker, or Kubernetes
- Familiarity with end-to-end analytical architectures, including data lakes, data warehouses, and real-time processing systems
- Experience creating insightful dashboards and reports using tools such as Power BI, Tableau, or custom visualization frameworks to effectively communicate model results to stakeholders
- 6+ years of experience in data science with a significant focus on predictive maintenance and anomaly detection
Who You Are And What You Bring
- Bachelor's or Master's degree/diploma in Engineering, Statistics, Mathematics, or Computer Science
- 6+ years of experience as a Data Scientist
- Strong problem-solving skills
- Proven ability to work independently and accurately
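As an illustration of the temporal feature extraction the listing calls for (lags, rolling statistics, and simple trend deltas), here is a minimal PySpark sketch; the telemetry schema and values are hypothetical placeholders, not Enphase's actual data model.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("temporal-features").getOrCreate()

# Hypothetical telemetry: one row per device per day.
telemetry = spark.createDataFrame(
    [("dev-1", "2024-01-01", 3.2), ("dev-1", "2024-01-02", 3.1),
     ("dev-1", "2024-01-03", 2.4), ("dev-2", "2024-01-01", 4.0),
     ("dev-2", "2024-01-02", 4.1), ("dev-2", "2024-01-03", 4.3)],
    ["device_id", "reading_date", "power_output"],
).withColumn("reading_date", F.to_date("reading_date"))

# Ordered per-device window; the rolling window covers the trailing 7 rows.
w = Window.partitionBy("device_id").orderBy("reading_date")
w7 = w.rowsBetween(-6, 0)

features = (
    telemetry
    .withColumn("lag_1", F.lag("power_output", 1).over(w))          # previous day's value
    .withColumn("roll_mean_7", F.avg("power_output").over(w7))      # smoothed level
    .withColumn("roll_std_7", F.stddev("power_output").over(w7))    # local volatility
    .withColumn("delta_1", F.col("power_output") - F.col("lag_1"))  # day-over-day trend
)
features.show()
```

Features like these feed directly into anomaly detection or remaining-useful-life models; the window lengths and statistics would be tuned to the actual sensor cadence.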
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Min Experience: 5 years
Location: Bengaluru
JobType: full-time
Requirements
We are looking for an experienced Data Scientist with a strong background in the CPG (Consumer Packaged Goods) or Retail domain, focusing on category and product analytics, forecasting, and machine learning workflows. The ideal candidate will possess advanced analytical skills, business acumen, and hands-on expertise in modern data science tools and platforms such as Python, SQL, Databricks, PySpark, and CI/CD ML pipelines. As a Data Scientist, you will be responsible for generating actionable insights across product assortment, category performance, sales trends, and customer behaviors. Your work will directly influence decision-making for new product launches, inventory optimization, campaign effectiveness, and category planning, enabling our teams to enhance operational efficiency and drive business growth.
Key Responsibilities
- Category & Product Analytics: Conduct deep-dive analysis into product assortment, SKU performance, pricing effectiveness, and category trends. Evaluate new product launches and provide recommendations for optimization based on early performance indicators. (A brief PySpark sketch of SKU-level category analytics follows this listing.)
- Sales Data Analysis & Forecasting: Analyze historical and real-time sales data to identify key growth drivers, seasonality, and demand patterns. Build statistical and ML-based models to forecast demand and category-level performance at multiple aggregation levels.
- Customer Analytics (Nice to Have): Analyze loyalty program data and campaign performance metrics to assess customer retention and ROI of promotions.
- ML Model Development & Deployment: Design, build, and deploy machine learning models using Python and PySpark to address business problems in forecasting, product clustering, and sales optimization. Maintain and scale CI/CD pipelines for ML workflows using tools like MLflow, Azure ML, or similar.
- Data Engineering and Tooling: Develop and optimize data pipelines on Databricks and ensure reliable data ingestion and transformation for analytics use cases. Use SQL and PySpark to manipulate and analyze large datasets with performance and scalability in mind.
- Visualization & Stakeholder Communication: Build impactful dashboards using Power BI (preferred) to enable self-service analytics for cross-functional teams. Translate data insights into clear, compelling business narratives for leadership and non-technical stakeholders.
- Collaboration & Strategic Insights: Work closely with category managers, marketing, and supply chain teams to align data science initiatives with key business objectives. Proactively identify opportunities for innovation and efficiency across product and sales functions.
Required Skills & Qualifications
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related quantitative field.
- 5+ years of experience in applied data science, preferably in CPG/Retail/FMCG domains.
- Proficient in Python, SQL, Databricks, and MLflow. Experience with PySpark and Azure ML is a strong plus.
- Deep experience with time-series forecasting, product affinity modeling, and campaign analytics.
- Familiarity with Power BI for dashboarding and visualization.
- Strong storytelling skills, with the ability to explain complex data-driven insights to senior stakeholders.
- Solid understanding of challenges and opportunities within the retail and FMCG space.
- Ability to work independently as well as in cross-functional teams in a fast-paced environment.
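To make the category and SKU analytics concrete, here is a minimal PySpark sketch computing each SKU's share of its category and its month-over-month growth; the data and column names are hypothetical placeholders, not a client's actual schema.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("category-analytics").getOrCreate()

# Hypothetical POS extract: one row per SKU per month.
sales = spark.createDataFrame(
    [("snacks", "SKU-1", "2024-01", 1200.0), ("snacks", "SKU-2", "2024-01", 800.0),
     ("snacks", "SKU-1", "2024-02", 1500.0), ("snacks", "SKU-2", "2024-02", 700.0),
     ("beverages", "SKU-3", "2024-01", 2000.0), ("beverages", "SKU-3", "2024-02", 2600.0)],
    ["category", "sku", "month", "revenue"],
)

by_cat = Window.partitionBy("category", "month")
by_sku = Window.partitionBy("sku").orderBy("month")

sku_perf = (
    sales
    # SKU share of its category for the month
    .withColumn("category_share", F.col("revenue") / F.sum("revenue").over(by_cat))
    # Month-over-month revenue growth per SKU
    .withColumn("prev_revenue", F.lag("revenue").over(by_sku))
    .withColumn("mom_growth", (F.col("revenue") - F.col("prev_revenue")) / F.col("prev_revenue"))
)
sku_perf.orderBy("category", "sku", "month").show()
```

The same aggregates, produced at different levels (SKU, brand, category), are the usual inputs to launch evaluations and demand forecasts mentioned above.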
Posted 1 week ago
4.0 years
18 - 20 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Salary range: Rs 1800000 - Rs 2000000 (i.e., INR 18-20 LPA)
Min Experience: 4 years
Location: Bangalore
JobType: full-time
Requirements
We are seeking an experienced and detail-oriented Data Analyst with a strong background in SQL, PySpark, Python, and Power BI (PBI) to join our data and analytics team. As a Data Analyst, you will play a critical role in transforming raw data into actionable insights that drive strategic business decisions. You'll work closely with cross-functional teams including business, product, engineering, and marketing to understand data requirements, build robust data models, and deliver meaningful reports and dashboards. The ideal candidate has 4+ years of hands-on experience working in fast-paced, data-driven environments, with a strong command of data querying, scripting, and visualization. This is an excellent opportunity for someone who enjoys solving complex data problems and communicating insights to both technical and non-technical stakeholders.
Key Responsibilities
- Data Extraction & Transformation: Use SQL and PySpark to extract, clean, transform, and aggregate large datasets from structured and unstructured sources.
- Data Analysis: Conduct exploratory and ad-hoc data analysis using Python and other statistical tools to identify trends, anomalies, and business opportunities. (A small Python sketch of a simple anomaly check follows this listing.)
- Dashboarding & Reporting: Design, develop, and maintain interactive dashboards and reports using Power BI to visualize KPIs, business metrics, and forecasts.
- Data Modeling: Build and maintain efficient and scalable data models to support reporting and analytics use cases.
- Business Collaboration: Partner with internal teams to gather requirements, understand business challenges, and deliver data-driven solutions.
- Performance Tracking: Monitor campaign and business performance, identify areas of improvement, and suggest data-backed strategies.
- Automation: Streamline and automate recurring reporting processes using Python scripting and PBI integrations.
- Data Governance: Ensure data accuracy, consistency, and compliance with privacy regulations and data governance frameworks.
- Documentation: Maintain comprehensive documentation of data workflows, pipelines, and dashboards for knowledge transfer and reproducibility.
Required Skills And Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Statistics, or a related field.
- 4+ years of professional experience as a Data Analyst or in a similar role involving large-scale data analysis.
- Strong expertise in SQL for data querying, joins, aggregations, and optimization techniques.
- Hands-on experience with PySpark for big data processing and distributed computing.
- Proficiency in Python for data manipulation, statistical analysis, and building automation scripts.
- Advanced working knowledge of Power BI for building reports, dashboards, and performing DAX calculations.
- Strong analytical thinking, with the ability to work independently and manage multiple projects simultaneously.
- Excellent communication and stakeholder management skills; ability to translate complex data into simple business insights.
- Familiarity with cloud platforms (Azure/AWS/GCP), data warehouses (Snowflake, Redshift, BigQuery), and version control tools (Git) is a plus.
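As a small illustration of the exploratory anomaly checks mentioned above, here is a Python sketch using pandas; the KPI values are made up, and a z-score threshold is only one of many possible rules.

```python
import pandas as pd

# Hypothetical daily KPI extract, e.g., pulled via SQL into a DataFrame.
kpi = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=10, freq="D"),
    "orders": [120, 118, 125, 119, 122, 310, 121, 117, 123, 120],
})

# Flag days whose order count deviates sharply from the overall norm.
mean, std = kpi["orders"].mean(), kpi["orders"].std()
kpi["z_score"] = (kpi["orders"] - mean) / std
kpi["is_anomaly"] = kpi["z_score"].abs() > 2  # simple threshold; tune per metric

print(kpi[kpi["is_anomaly"]])
```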
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions
- Knowledge of data lake and data warehouse concepts
- Experience working with AWS big data technologies
- Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection
- Deploy and configure components to production environments (a brief AWS Glue job sketch follows this listing)
Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8
Education qualification: B.Tech/MBA/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: AWS DevOps
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
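As an illustration of the kind of ingestion pipeline described above, here is a minimal AWS Glue (PySpark) job sketch; the catalog database, table, and S3 paths are hypothetical placeholders, and a real job would add data-quality checks and failure handling as the listing requires.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest a raw table registered in the Glue Data Catalog (names are placeholders).
raw = glue_context.create_dynamic_frame.from_catalog(database="raw_db", table_name="orders")

# Light transformation in Spark: drop bad rows and standardize timestamp/date columns.
df = (raw.toDF()
      .dropna(subset=["order_id"])
      .withColumn("order_ts", F.to_timestamp("order_ts"))
      .withColumn("order_date", F.to_date("order_ts")))

# Write the curated output back to S3 as partitioned Parquet for downstream Athena/Redshift use.
curated = DynamicFrame.fromDF(df, glue_context, "curated")
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://my-data-lake/curated/orders/", "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()
```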
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description:
Job Description
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client's end, and to ensure they meet 100% quality assurance parameters.
Do
1. Be instrumental in understanding the requirements and design of the product/software
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas across the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and impact availability
- Analyze client requirements and convert requirements into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities
2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases, and executing these cases
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all the codes are raised as per the norm defined for the project/program/account, with a clear description and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risk, and report them to concerned stakeholders
3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback on a regular basis to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document all necessary details and reports in a formal way for proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally
Deliver
No. | Performance Parameter | Measure
1 | Continuous Integration, Deployment & Monitoring of Software | 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3 | MIS & Reporting | 100% on-time MIS & report generation
Mandatory Skills: PySpark
Experience: 5-8 Years
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 week ago
6.0 - 11.0 years
25 - 35 Lacs
Gurugram, Chennai, Bengaluru
Hybrid
Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED.
Contract To Hire (C2H) Role
Location: Bengaluru, IN; Gurgaon, IN; Chennai, IN
Payroll: BCforward
Work Mode: Hybrid
JD: GCP; PySpark; ETL - Big Data / Data Warehousing; SQL; Python
- Experienced data engineer with hands-on experience on GCP offerings
- Experienced in BigQuery/BigTable/PySpark
- Worked on prior data engineering projects leveraging GCP product offerings
- Strong SQL background
- Prior big data experience
Please share your updated resume, PAN card soft copy, passport-size photo & UAN history. Interested applicants can share their updated resume to g.sreekanth@bcforward.com
Note: Looking for immediate to 30-day joiners at most. All the best!
Posted 1 week ago
3.0 - 5.0 years
5 - 12 Lacs
Hyderabad, Chennai
Work from Office
Greetings!!!
Hiring GCP Data Engineers for the Chennai/Hyderabad location.
Skills - GCP, PySpark, Python, Airflow, SQL
Location - Chennai/Hyderabad (WFO)
Experience - 3 to 5 years
Interested candidates can share their resumes with anmol.bhatia@incedoinc.com
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
Scale an existing RAG code base for a production-grade AI application. (A minimal, framework-agnostic RAG sketch follows this listing.)
Requirements:
- Proficiency in prompt engineering, LLMs, and Retrieval Augmented Generation
- Programming languages like Python or Java
- Experience with vector databases
- Experience using LLMs in software applications, including prompting, calling, and processing outputs
- Experience with AI frameworks such as LangChain
- Troubleshooting skills and creativity in finding new ways to leverage LLMs
- Experience with Azure
Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help showcase the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
Documentation and Knowledge Sharing: Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge sharing initiatives and mentor new team members.
Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation. Experience in Python and PySpark will be an added advantage.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face
- Understanding of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms (e.g., Kubernetes, AWS, Azure, GCP) and related services is a plus
- Experience and working knowledge in COBOL & Java would be preferred
- Experience in code generation, code matching & code translation
- Prepare effort estimates, WBS, staffing plans, RACI, RAID, etc.
- Excellent interpersonal and communication skills; engage with stakeholders for analysis and implementation
- Commitment to continuous learning and staying updated with advancements in the field of AI
- Demonstrate a growth mindset to understand clients' business processes and challenges
Preferred Technical And Professional Experience
- PySpark, SQL, Python
Job Description:
- Classical ML model (either machine learning or deep learning) building experience with feature creation (not PoC experience)
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
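To ground the RAG requirement, here is a deliberately framework-agnostic Python sketch of the retrieve-augment-generate loop; both `embed` and `generate` are stand-in placeholders for real embedding and LLM calls (e.g., via LangChain or Azure services), and the chunks are toy data.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: in practice, call an embedding model and cache the result."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Placeholder: in practice, call the LLM with the augmented prompt."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

# 1. Index: embed each document chunk once and keep the vector alongside the text.
chunks = ["COBOL PERFORM loops map to Java for-loops.",
          "Vector databases store embeddings for similarity search.",
          "LangChain chains prompts, retrievers, and models together."]
index = [(c, embed(c)) for c in chunks]

def retrieve(question: str, k: int = 2):
    """2. Retrieve: rank chunks by cosine similarity to the question embedding."""
    q = embed(question)
    scored = [(float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))), c) for c, v in index]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

# 3. Augment + generate: stuff the retrieved context into the prompt.
question = "How do I translate a COBOL loop to Java?"
context = "\n".join(retrieve(question))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

Scaling this to production, as the role requires, mostly means swapping the in-memory list for a vector database, batching embeddings, and adding evaluation and observability around each step.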
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant - Data Engineer, AWS + Python, Spark, Kafka for ETL!
Responsibilities
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka. (A small PySpark/Kafka streaming sketch follows this listing.)
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark, with appropriate cloud-based services like Amazon AWS.
- Build data pipelines by building ETL processes (Extract-Transform-Load).
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Be responsible for analysing business and functional requirements, which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyse requirements/user stories in business meetings and strategize the impact of requirements on different platforms/applications; convert the business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security.
- Perform unit testing on the modified software to ensure that the new functionality works as expected while existing functionalities continue to work in the same way.
- Coordinate with release management and other supporting teams to deploy changes in the production environment.
Qualifications we seek in you!
Minimum Qualifications
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS
- Strong experience implementing data lakes using AWS services like Glue, Lambda, Step, Redshift
- Experience with Databricks will be an added advantage
- Strong experience in Python and SQL
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift
- Advanced programming skills in Python for data processing and automation
- Hands-on experience with Apache Spark for large-scale data processing
- Experience with Apache Kafka for real-time data streaming and event processing
- Proficiency in SQL for data querying and transformation
- Strong understanding of security principles and best practices for cloud-based environments
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance
- Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment
- Strong communication and collaboration skills to work effectively with cross-functional teams
Preferred Qualifications / Skills
- Master's degree in Computer Science, Electronics, or Electrical
- AWS Data Engineering & Cloud certifications, Databricks certifications
- Experience with multiple data integration technologies and cloud platforms
- Knowledge of Change & Incident Management processes
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
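As a minimal illustration of the Kafka-to-data-lake ETL described above, here is a PySpark Structured Streaming sketch; the broker, topic, event schema, and S3 paths are placeholders, and a production job would add error handling, schema evolution, and monitoring.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-etl").getOrCreate()

# Schema of the JSON events on the topic (hypothetical).
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("region", StringType()),
])

# Extract: consume the topic as a streaming DataFrame.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "orders")                       # placeholder topic
       .load())

# Transform: decode the Kafka value payload and parse the JSON body.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", event_schema).alias("e"))
          .select("e.*"))

# Load: land the parsed events in the data lake as Parquet, with checkpointing for fault tolerance.
query = (events.writeStream.format("parquet")
         .option("path", "s3a://my-lake/raw/orders/")          # placeholder path
         .option("checkpointLocation", "s3a://my-lake/chk/orders/")
         .start())
query.awaitTermination()
```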
Posted 1 week ago
9.0 - 12.0 years
14 - 24 Lacs
Gurugram
Remote
We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems. Role & responsibilities Design and implement data lake zoning (Raw Clean Modeled) using Amazon S3, AWS Glue, and Athena. Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data. Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg. Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modelling. Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging. Implement S3 lifecycle policies, intelligent file partitioning, and audit logging. Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs. Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling. Design and manage a forecast feature registry with metrics versioning and traceability. Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption. Preferred candidate profile 9-12 years of experience in data engineering. Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and Glue Data Catalog. Strong command over PySpark, dbt-core, CTAS query optimization, and partition strategies. Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion. Experience in S3 metadata tagging and scalable data lake design patterns. Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows). Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation. Strong understanding of time series KPIs, such as revenue forecasts, occupancy trends, or demand volatility. Data observability best practices including field-level logging, anomaly alerts, and classification tagging. Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries. Familiarity with Superset or Streamlit for QA visualization and UAT reporting. Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion. Independent, critical thinker with the ability to design for scale and evolving business logic. Strong communication and collaboration with BI, QA, and business stakeholders. High attention to detail in ensuring data accuracy, quality, and documentation. Comfortable interpreting business-level KPIs and transforming them into technical pipelines.
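To illustrate the upsert-ready ingestion this listing centres on, here is a minimal PySpark sketch of an Apache Hudi upsert write; the table, key, and bucket names are hypothetical, and exact Hudi options can vary by version, so treat it as a sketch rather than a production job.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("hudi-upsert")
         # Assumes the Hudi Spark bundle is available on the cluster.
         .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
         .getOrCreate())

# Incremental batch of POS records (hypothetical schema).
incoming = spark.createDataFrame(
    [("store-1|2024-01-01", "store-1", "2024-01-01", 1250.0, "2024-01-02T03:00:00"),
     ("store-1|2024-01-02", "store-1", "2024-01-02", 990.0, "2024-01-03T03:00:00")],
    ["record_key", "store_id", "sales_date", "revenue", "ingest_ts"],
)

hudi_options = {
    "hoodie.table.name": "pos_sales",
    "hoodie.datasource.write.recordkey.field": "record_key",   # identity of a row
    "hoodie.datasource.write.precombine.field": "ingest_ts",   # latest version wins on upsert
    "hoodie.datasource.write.partitionpath.field": "sales_date",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert into the modeled zone; re-running with corrected rows updates them in place,
# which is what makes the dataset safe for versioned, forecast-ready consumption.
(incoming.write.format("hudi")
 .options(**hudi_options)
 .mode("append")
 .save("s3://my-lake/modeled/pos_sales/"))   # placeholder bucket
```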
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're seeking a talented and experienced Big Data & AI Specialist to join our growing team. The ideal candidate will have a strong background in Python and PySpark, with a proven ability to work with large datasets and implement robust data solutions. Familiarity with Artificial Intelligence (AI) concepts and their application in real-world scenarios is also highly valued.
Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines using Python and PySpark for batch and real-time processing of large datasets.
- Implement and optimize data ingestion, transformation, and loading processes within a big data ecosystem.
- Collaborate with data scientists and other stakeholders to understand data requirements and translate them into technical solutions.
- Develop and deploy machine learning models and AI-driven solutions, leveraging your familiarity with AI concepts.
- Ensure data quality, integrity, and security across all data solutions.
- Troubleshoot and resolve performance issues and data-related problems.
- Stay up-to-date with the latest trends and technologies in big data, AI, and cloud platforms.
- Participate in code reviews and contribute to the overall technical excellence of the team.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of professional experience in big data technologies.
- Expert-level proficiency in Python for data manipulation, scripting, and application development.
- Strong hands-on experience with PySpark for big data processing and analysis.
- Experience with big data frameworks and tools such as Hadoop, Spark, Hive, Kafka, etc.
- Familiarity with Artificial Intelligence (AI) concepts, including machine learning algorithms, deep learning, natural language processing (NLP), or computer vision.
- Experience with cloud platforms (AWS, Azure, GCP) and their big data/AI services is a plus.
- Solid understanding of data warehousing concepts and ETL/ELT processes.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a collaborative team.
Posted 1 week ago
3.0 - 6.0 years
4 - 9 Lacs
Bengaluru
Hybrid
Description - Internal
BI & Data Analyst - Power BI Development Experience
- Experience with Tableau to Power BI migration
- Experience with performance tuning of dashboards
- ETL tool exposure (ADF) for creating data pipelines
Positions in this function are responsible for the management and manipulation of mostly structured data, with a focus on building business intelligence tools, conducting analysis to distinguish patterns and recognize trends, performing normalization operations and assuring data quality. Depending on the specific role and business line, example responsibilities in this function could include creating specifications to bring data into a common structure, creating product specifications and models, developing data solutions to support analyses, performing analysis, interpreting results, developing actionable insights and presenting recommendations for use across the company. Roles in this function could partner with stakeholders to understand data requirements and develop tools and models such as segmentation, dashboards, data visualizations, decision aids and business case analysis to support the organization. Other roles involved could include producing and managing the delivery of activity and value analytics to external stakeholders and clients. Team members will typically use business intelligence, data visualization, query, analytic and statistical software to build solutions, perform analysis and interpret data. Positions in this function work on predominantly descriptive and regression-based analytics and tend to leverage subject matter expert views in the design of their analytics and algorithms. This function is not intended for employees performing the following work: production of standard or self-service operational reporting, causal inference led (healthcare analytics) or data pattern recognition (data science) analysis, and/or image or unstructured data analysis using sophisticated theoretical frameworks.
- Analyzes and investigates.
- Provides explanations and interpretations within area of expertise.
Qualifications - Internal
- Undergraduate degree or equivalent experience. BE/BTech - Computer Science
Careers with Optum. Here's the idea. We built an entire organization around one giant objective: make the health system work better for everyone. So when it comes to how we use the world's large accumulation of health-related information, or guide health and lifestyle choices, or manage pharmacy benefits for millions, our first goal is to leap beyond the status quo and uncover new ways to serve. Optum, part of the UnitedHealth Group family of businesses, brings together some of the greatest minds and most advanced ideas on where health care has to go in order to reach its fullest potential. For you, that means working on high performance teams against sophisticated challenges that matter. Optum, incredible ideas in one incredible company and a singular opportunity to do your life's best work.SM
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary
Position Summary
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
Job Title: Senior Data Scientist/Team Lead
Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.
Key Responsibilities:
- Lead and deliver large-scale DS/ML end-to-end projects across multiple industries and domains
- Liaise with on-site and client teams to understand various business problem statements, use cases and project requirements
- Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
- Assist and participate in pre-sales, client pursuits and proposals
- Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members
Qualifications:
- 6-10 years of relevant hands-on experience in Data Science, Machine Learning, Statistical Modeling
- Bachelor's or Master's degree in a quantitative field
- Led a 3-5 member team on multiple end-to-end DS/ML projects
- Excellent communication and client/stakeholder management skills
- Must have strong hands-on experience with programming languages like Python, PySpark and SQL, and frameworks such as NumPy, Pandas, Scikit-learn, etc. (a small scikit-learn sketch follows this listing)
- Expertise in Classification, Regression, Time series, Decision Trees, Optimization, etc.
- Hands-on knowledge of Docker containerization, Git, Tableau or Power BI
- Model deployment on Cloud or On-prem will be an added advantage
- Familiar with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
- Should follow research papers, and comprehend and innovate/present the best approaches/solutions related to DS/ML
- AI/Cloud certification from a premier institute is preferred
#GenAI&ML
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them.
Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 300022
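To make the skills list in the qualifications above concrete, here is a minimal scikit-learn sketch of the kind of classification work described (decision trees, pipelines); a public demo dataset stands in for client data, and the model choice and parameters are illustrative only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Public demo dataset standing in for a client problem.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Scaling plus a shallow decision tree kept interpretable for stakeholder review.
model = Pipeline([
    ("scale", StandardScaler()),
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=42)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```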
Posted 1 week ago
7.0 years
8 - 9 Lacs
Thiruvananthapuram
On-site
7 - 9 Years | 4 Openings | Trivandrum
Role description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Outcomes:
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements, and create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards, and debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.
Measures of Outcomes:
- TeamOne's adherence to engineering processes and standards
- TeamOne's adherence to schedule/timelines
- TeamOne's adherence to SLAs where applicable
- TeamOne's # of defects post delivery
- TeamOne's # of non-compliance issues
- TeamOne's reduction of reoccurrence of known defects
- TeamOne's quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- TeamOne's average time to detect, respond to, and resolve pipeline failures or data issues
- TeamOne's number of data security incidents or compliance breaches
Outputs Expected:
- Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.
- Configure: Define and govern the configuration management plan. Ensure compliance from the team.
- Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
- Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
- Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
- Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
- Estimate: Create and provide input for effort and size estimation and plan resources for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
- Interface with Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
- Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
- Certifications: Obtain relevant domain and technology certifications.
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
Additional Comments: We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.
Must-Have Skills
- 8+ years of hands-on experience in data engineering or big data development.
- Strong proficiency in PySpark and SQL for data transformation and pipeline development.
- Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
- Practical knowledge of cloud data environments – Azure, AWS, or GCP.
- Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design. (A brief PySpark star-schema sketch follows this listing.)
- Proven experience designing and maintaining ETL/ELT pipelines in production.
- Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).
Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
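To make the Kimball/star-schema requirement concrete, here is a minimal PySpark sketch of loading a fact table with surrogate keys looked up from conformed dimensions; the schemas and output path are hypothetical placeholders, and a real pipeline would also handle late-arriving dimensions and slowly changing attributes.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-load").getOrCreate()

# Hypothetical staged orders plus conformed date and customer dimensions.
orders = spark.createDataFrame(
    [("O-1", "2024-01-05", "C-10", 250.0), ("O-2", "2024-01-06", "C-11", 90.0)],
    ["order_id", "order_date", "customer_id", "amount"],
)
dim_date = spark.createDataFrame(
    [(20240105, "2024-01-05", 1, 2024), (20240106, "2024-01-06", 1, 2024)],
    ["date_key", "calendar_date", "quarter", "year"],
)
dim_customer = spark.createDataFrame(
    [(1, "C-10", "Retail"), (2, "C-11", "Wholesale")],
    ["customer_key", "customer_id", "segment"],
)

# Fact rows carry surrogate keys from the dimensions plus additive measures only.
fact_sales = (
    orders
    .join(dim_date, orders.order_date == dim_date.calendar_date, "left")
    .join(dim_customer, "customer_id", "left")
    .select("date_key", "customer_key", F.col("amount").alias("sales_amount"))
)
fact_sales.write.mode("overwrite").parquet("/tmp/warehouse/fact_sales")  # placeholder target
```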
Posted 1 week ago
8.0 years
0 - 0 Lacs
Hyderābād
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Data Job Details About Salesforce Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn’t a buzzword — it’s a way of life. The world of work as we know it is changing and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level-up your career at the company leading workforce transformation in the agentic era? You’re in the right place! Agentforce is the future of AI, and you are the future of Salesforce. About the Company - We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. About the role We’re looking for an experienced Data Scientist who will help us build marketing attribution, causal inference, and uplift models to improve the effectiveness and efficiency of our marketing efforts. This person will also design experiments and help us drive consistent approach to experimentation and campaign measurement to support a range of marketing, customer engagement, and digital use cases. This Lead Data Scientist brings significant experience in designing, developing, and delivering statistical models and AI/ML algorithms for marketing and digital optimization use cases on large-scale data sets in a cloud environment. They show rigor in how they prototype, test, and evaluate algorithm performance both in the testing phase of algorithm development and in managing production algorithms. They demonstrate advanced knowledge of statistical and machine learning techniques along with ensuring the ethical use of data in the algorithm design process. At Salesforce, Trust is our number one value and we expect all applications of statistical and machine learning models to adhere to our values and policies to ensure we balance business needs with responsible uses of technology. Responsibilities As part of the Marketing Effectiveness Data Science team within the Salesforce Marketing Data Science organization, develop statistical and machine learning models to improve marketing effectiveness - e.g., marketing attribution models, causal inference models, uplift models, etc. Develop optimization and simulation algorithms to provide marketing investment and allocation recommendations to improve ROI by optimizing spend across marketing channels. Own the full lifecycle of model development from ideation and data exploration, algorithm design and testing, algorithm development and deployment, to algorithm monitoring and tuning in production. Design experiments to support marketing, customer experience, and digital campaigns and develop statistically sound models to measure impact. Collaborate with other data scientists to develop and operationalize consistent approaches to experimentation and campaign measurement. 
Be a master of cross-functional collaboration by developing deep relationships with key partners across the company and coordinating with working teams. Constantly learn, and keep a clear pulse on innovation across the enterprise SaaS, AdTech, paid media, data science, customer data, and analytics communities.
Required Skills
- 8+ years of experience designing models for marketing optimization, such as multi-channel attribution models, customer lifetime value models, propensity models, uplift models, etc., using statistical and machine learning techniques.
- 8+ years of experience using advanced statistical techniques for experiment design (A/B and multi-cell testing) and causal inference methods for understanding business impact. Must have multiple, robust examples of using these techniques to measure effectiveness of marketing efforts and to solve business problems on large-scale data sets.
- 8+ years of experience with one or more programming languages such as Python, R, PySpark, or Java.
- Expert-level knowledge of SQL with strong data exploration and manipulation skills.
- Experience using cloud platforms such as GCP and AWS for model development and operationalization is preferred.
- Must have superb quantitative reasoning and interpretation skills, with a strong ability to provide analysis-driven business insight and recommendations.
- Excellent written and verbal communication skills; ability to work well with peers and leaders across data science, marketing, and engineering organizations.
- Creative problem-solver who simplifies problems to their core elements.
- B2B customer data experience is a big plus. Advanced Salesforce product knowledge is also a plus.
Unleash Your Potential
When you join Salesforce, you'll be limitless in all areas of your life. Our benefits and resources support you to find balance and be your best, and our AI agents accelerate your impact so you can do your best. Together, we'll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future, but to redefine what's possible, for yourself, for AI, and the world.
Accommodations
If you require assistance due to a disability when applying for open positions, please submit a request via this Accommodations Request Form.
Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit.
The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
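As an illustration of the experiment-design and campaign-measurement work this role describes, here is a minimal Python sketch of an A/B conversion-rate readout using statsmodels; the counts are made-up placeholders, and a real analysis would add guardrail metrics, multi-cell corrections, and the causal-inference checks the listing mentions.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# Hypothetical campaign experiment: conversions out of exposed users per cell.
conversions = np.array([480, 530])    # control, treatment
exposed = np.array([10_000, 10_000])

# Two-sample z-test on conversion rates (classic A/B readout).
z_stat, p_value = proportions_ztest(conversions, exposed)

# Per-cell confidence intervals and the naive lift estimate.
ci_low, ci_high = proportion_confint(conversions, exposed, alpha=0.05)
lift = conversions[1] / exposed[1] - conversions[0] / exposed[0]

print(f"lift = {lift:.4f}, z = {z_stat:.2f}, p = {p_value:.4f}")
print(f"control CI = ({ci_low[0]:.4f}, {ci_high[0]:.4f}), "
      f"treatment CI = ({ci_low[1]:.4f}, {ci_high[1]:.4f})")
```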
Posted 1 week ago
5.0 years
4 - 6 Lacs
Gurgaon
On-site
With 5 years of experience in Python, PySpark, and SQL, you will have the necessary skills to handle a variety of tasks. You will also have hands-on experience with AWS services, including Glue, EMR, Lambda, S3, EC2, and Redshift. Your work mode will be based out of the Virtusa office, allowing you to collaborate with a team of experts. Your main skills should include Scala, Kafka, PySpark, and AWS Native Data Services, as these are mandatory for the role. Additionally, having knowledge in Big Data will be a nice to have skill that will set you apart from other candidates. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
5.0 years
5 - 10 Lacs
Gurgaon
On-site
Senior Manager EXL/SM/1411366, Digital Solutions, Gurgaon
Posted On: 09 Jul 2025 | End Date: 23 Aug 2025 | Required Experience: 5 - 15 Years

Basic Section
Number Of Positions: 1
Band: C2
Band Name: Senior Manager
Cost Code: G090140
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 2000000.0000 - 4000000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office

Organisational
Group: EXL Digital
Sub Group: Digital Solutions
Organization: Digital Solutions
LOB: Digital Delivery Practice
SBU: Automation & Technology
Country: India
City: Gurgaon
Center: IN Gurgaon C61

Skills: PYTHON, SAS
Minimum Qualification: B.TECH/B.E
Certification: No data available

Job Description
We are looking for Data Engineers with expertise in SAS, Python, and PySpark to support code migration and data migration projects from legacy environments to cloud platforms. This role entails hands-on experience leveraging EXL's Generative AI solution, Code Harbor, to streamline migration processes, automate code refactoring, and optimize data transformation. The ideal candidate will have 5+ years of relevant experience in IT services, with strong knowledge of modernizing data pipelines, transforming legacy codebases, and optimizing big data processing for cloud infrastructure.

Key Responsibilities: Migrate code from SAS/legacy systems to Python/cloud-native frameworks. Develop and optimize enhanced data pipelines using PySpark for efficient cloud-based processing. Refactor and modernize legacy SAS-based workflows, ensuring seamless AI-assisted translation for cloud execution. Ensure data integrity, security, and performance throughout the migration lifecycle. Troubleshoot AI-generated outputs to refine accuracy and resolve migration-related challenges.

Required Skills & Qualifications: Strong expertise in SAS, Python, and PySpark, with experience in code migration and data transformation. Strong problem-solving skills and adaptability in fast-paced AI-driven migration projects. Excellent communication and collaboration skills to work with cross-functional teams.

Education Background: Bachelor's or master's degree in computer science, engineering, or a related field. Tier I/II candidates preferred. Candidates with a shorter notice period are preferred.

Workflow Type: L&S-DA-Consulting
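To make the SAS-to-PySpark migration work described above concrete, here is a small, hypothetical sketch showing how a typical SAS DATA-step filter-and-derive pattern might be re-expressed in PySpark. The dataset, column names, paths, and business rule are invented for illustration only and are not taken from the posting or from the Code Harbor tool.

```python
# Original (hypothetical) SAS logic being migrated:
#   data work.high_value;
#     set raw.transactions;
#     where status = 'POSTED';
#     if amount >= 10000 then tier = 'HIGH';
#     else tier = 'STANDARD';
#   run;
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sas-migration-example").getOrCreate()

transactions = spark.read.parquet("s3://example-landing/transactions/")  # placeholder path

high_value = (
    transactions
    .filter(F.col("status") == "POSTED")                                  # SAS WHERE clause
    .withColumn(
        "tier",
        F.when(F.col("amount") >= 10000, F.lit("HIGH")).otherwise(F.lit("STANDARD")),
    )                                                                      # SAS IF/ELSE assignment
)

high_value.write.mode("overwrite").parquet("s3://example-curated/high_value/")
```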
Posted 1 week ago
3.0 years
4 - 6 Lacs
Gurgaon
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activity as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and to detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate with, support, and work alongside multiple distributed project teams across regions. As a member of the technical staff on our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Qualifications
Responsibilities: System Deployment: Conceive, design, and build new features in the existing backend processing pipelines. CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes; ensure continuous integration and delivery of features, improvements, and bug fixes. Code Quality and Best Practices: Enforce coding standards, best practices, and design principles; conduct code reviews and provide constructive feedback to maintain high code quality. Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores. Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices; collaborate with cross-functional teams to ensure a cohesive and unified approach to software development. Security and Compliance: Implement security best practices for all tiers of the system; ensure compliance with industry standards and regulations related to AWS platform security.

Key Skills: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Proven experience (minimum 3 years) in high-volume data processing development using ETL tools such as AWS Glue or PySpark, along with Java, SQL, and databases such as Postgres (a minimal Glue job sketch follows this listing). Minimum 2 years of development on the AWS platform. Strong understanding of CI/CD principles and tools; GitLab a plus. Excellent problem-solving and debugging skills. Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions. Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply. Uses team collaboration to create innovative solutions efficiently.

Other desirable skills: Knowledge of networking principles and security best practices. AWS certifications. Experience with data warehouses, ETL, and/or data lakes is very desirable. Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and Opsgenie is a bonus. Exposure to the Google Cloud Platform (GCP).

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
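Since this listing centers on AWS Glue/PySpark ETL for high-volume processing, below is a minimal, hypothetical skeleton of a Glue Spark job that reads a catalogued table, cleans it, and writes partitioned Parquet to S3. The database, table, and path names are placeholders and do not describe Nielsen's actual pipeline; the script assumes it runs inside the Glue job runtime, where the awsglue libraries are available.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (placeholder names).
events = glue_context.create_dynamic_frame.from_catalog(
    database="example_meter_db", table_name="raw_usage_events"
).toDF()

# Keep only well-formed records and derive a processing date.
cleaned = (
    events.filter(F.col("device_id").isNotNull())
          .withColumn("processing_date", F.to_date("event_ts"))
)

# Write partitioned Parquet back to S3 for downstream loads (e.g. into Postgres or Redshift).
(cleaned.write.mode("append")
        .partitionBy("processing_date")
        .parquet("s3://example-processed-bucket/usage_events/"))

job.commit()
```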
Posted 1 week ago
5.0 years
4 - 6 Lacs
Gurgaon
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. The role is based out of the Virtusa office, where you will collaborate with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for the role; broader Big Data knowledge is a nice-to-have that will set you apart from other candidates. About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description: The Snowflake Developer will play a crucial role in designing, developing, and implementing data solutions using Snowflake's cloud-based data platform. The developer will be responsible for writing efficient procedures with Spark or SQL to facilitate data processing, transformation, and analysis. Strong Python/PySpark and SQL skills are required, along with some experience building data pipelines or other data engineering work. Knowledge of the AWS platform is expected; the candidate must be interested in upskilling, eager to learn, and bring the right attitude towards learning. Good expertise in SDLC/Agile. Experience in SQL, complex queries, and optimization. Experience in the Spark ecosystem, and familiarity with MongoDB data loads, Snowflake, and the AWS platform (EMR, Glue, S3). Hands-on experience in writing advanced SQL queries and familiarity with a variety of databases. Experience in handling end-to-end data testing for complex big data projects, including extensive experience in writing and executing test cases, performing data validations, system testing, and performance checks. Skills: Snowflake development, Python, PySpark, AWS
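As a small illustration of the Snowflake-plus-Python work this role describes, the sketch below uses the snowflake-connector-python package to run a set-based SQL transformation that deduplicates a staging table into a curated table. The account, warehouse, schema, and table names are placeholder values, not details from the posting.

```python
import os
import snowflake.connector

# Connection parameters are placeholders; in practice they would come from a secrets manager.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="EXAMPLE_WH",
    database="EXAMPLE_DB",
    schema="ANALYTICS",
)

try:
    cur = conn.cursor()
    # Deduplicate staging rows into a curated table, keeping the latest load per order_id.
    cur.execute("""
        INSERT INTO curated_orders (order_id, customer_id, order_ts, amount)
        SELECT order_id, customer_id, order_ts, amount
        FROM (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY load_ts DESC) AS rn
            FROM stg_orders
        )
        WHERE rn = 1
    """)
    print(f"Rows inserted: {cur.rowcount}")
finally:
    conn.close()
```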
Posted 1 week ago
5.0 years
7 - 9 Lacs
Gurgaon
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. The role is based out of the Virtusa office, where you will collaborate with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for the role; broader Big Data knowledge is a nice-to-have that will set you apart from other candidates. About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
3.0 - 12.0 years
5 - 10 Lacs
Gurgaon
On-site
Manager EXL/M/1411347, Digital Solutions, Gurgaon
Posted On: 09 Jul 2025 | End Date: 23 Aug 2025 | Required Experience: 3 - 12 Years

Basic Section
Number Of Positions: 1
Band: C1
Band Name: Manager
Cost Code: G090140
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 2000000.0000 - 3500000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office

Organisational
Group: EXL Digital
Sub Group: Digital Solutions
Organization: Digital Solutions
LOB: Digital Consulting & Implementation
SBU: Architecture & Development
Country: India
City: Gurgaon
Center: IN Gurgaon C61

Skills: PYTHON, SAS
Minimum Qualification: B.TECH/B.E
Certification: No data available

Job Description
We are looking for Data Engineers with expertise in SAS, Python, and PySpark to support code migration and data migration projects from legacy environments to cloud platforms. This role entails hands-on experience leveraging EXL's Generative AI solution, Code Harbor, to streamline migration processes, automate code refactoring, and optimize data transformation. The ideal candidate will have 2-3 years of relevant experience in IT services, with strong knowledge of modernizing data pipelines, transforming legacy codebases, and optimizing big data processing for cloud infrastructure.

Key Responsibilities: Migrate code from SAS/legacy systems to Python/cloud-native frameworks. Develop and optimize enhanced data pipelines using PySpark for efficient cloud-based processing. Refactor and modernize legacy SAS-based workflows, ensuring seamless AI-assisted translation for cloud execution. Ensure data integrity, security, and performance throughout the migration lifecycle. Troubleshoot AI-generated outputs to refine accuracy and resolve migration-related challenges.

Required Skills & Qualifications: Strong expertise in SAS, Python, and PySpark, with experience in code migration and data transformation. Strong problem-solving skills and adaptability in fast-paced AI-driven migration projects. Excellent communication and collaboration skills to work with cross-functional teams.

Education Background: Bachelor's or master's degree in computer science, engineering, or a related field. Tier I/II candidates preferred. Candidates with a shorter notice period are preferred.

Workflow Type: L&S-DA-Consulting
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. The role is based out of the Virtusa office, where you will collaborate with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for this role; broader Big Data knowledge is a nice-to-have that will set you apart from other candidates.
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.

What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset. Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, or other stakeholders. Increase awareness about available data and democratize access to it across the company.

As a data engineer, you will be the key technical expert building PepsiCo's data products to drive a strong vision. You'll be empowered to create data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help develop very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities: Act as a subject matter expert across different digital projects. Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Implement best practices around systems integration, security, performance, and data management. Empower the business by creating value through increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to "productionalize" data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.

Qualifications: 4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, and Scala. 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services (Azure certification is a plus). Experience in Azure Log Analytics. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations (a minimal data-quality check sketch follows this listing). Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools. Experience with statistical/ML techniques is a plus. Experience with building solutions in the retail or supply chain space is a plus. Experience with version control systems like GitHub and with deployment and CI tools. Working knowledge of agile development, including DevOps and DataOps concepts. B Tech/BA/BS in Computer Science, Math, Physics, or other technical fields.

Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Strong change manager, comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along. Consistently attain/exceed individual and team goals.
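As a minimal illustration of the data-profiling and data-quality checks this listing references, the PySpark sketch below computes null ratios for a few columns and fails fast when a threshold is breached. The storage path, column names, and thresholds are hypothetical, and in practice a dedicated framework such as Deequ or Great Expectations would typically be used instead of hand-rolled checks.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("simple-data-quality-check").getOrCreate()

# Placeholder input: a curated sales table on the data lake.
df = spark.read.parquet("abfss://curated@exampleaccount.dfs.core.windows.net/sales/")

# Maximum tolerated fraction of nulls per column (hypothetical thresholds).
null_thresholds = {"customer_id": 0.0, "order_ts": 0.01, "net_revenue": 0.05}

total_rows = df.count()
failures = []

for column, max_null_ratio in null_thresholds.items():
    null_rows = df.filter(F.col(column).isNull()).count()
    ratio = null_rows / total_rows if total_rows else 0.0
    print(f"{column}: {ratio:.4%} null")
    if ratio > max_null_ratio:
        failures.append(f"{column} null ratio {ratio:.4%} exceeds {max_null_ratio:.2%}")

# Fail the pipeline step so orchestration (e.g. Azure Data Factory) can alert and retry.
if failures:
    raise ValueError("Data quality check failed: " + "; ".join(failures))
```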
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. The role is based out of the Virtusa office, where you will collaborate with a team of experts. Scala, Kafka, PySpark, and AWS Native Data Services are mandatory skills for this role; broader Big Data knowledge is a nice-to-have that will set you apart from other candidates.
Posted 1 week ago