7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key responsibilities: Partner with business, product, and engineering teams to define problem statements, evaluate feasibility, and design AI/ML-driven solutions that deliver measurable business value. Lead and execute end-to-end AI/ML projects, from data exploration and model development to validation, deployment, and monitoring in production. Drive solution architecture using advanced techniques in machine learning, NLP, Generative AI, and statistical modeling. Champion the scalability, reproducibility, and sustainability of AI solutions by establishing best practices in model development, CI/CD, and performance tracking. Guide junior and associate AI/ML scientists through technical mentoring, code reviews, and solution reviews. Identify and evangelize the adoption of emerging tools, technologies, and methodologies across teams. Translate technical outputs into actionable insights for business stakeholders through storytelling, data visualizations, and stakeholder engagement. We are looking for: A seasoned AI/ML scientist with 7+ years of hands-on experience delivering enterprise-grade AI/ML solutions. Advanced proficiency in Python, SQL, and PySpark, and experience working with cloud platforms (Azure preferred) and tools such as Databricks, Synapse, ADF, and Web Apps. Strong expertise in text analytics, NLP, and Generative AI, with real-world deployment exposure. Solid understanding of model evaluation, optimization, bias mitigation, and monitoring in production. A problem solver with scientific rigor, strong business acumen, and the ability to bridge the gap between data and decisions. Prior experience in leading cross-functional AI initiatives or collaborating with engineering teams to deploy ML pipelines. Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related quantitative field; a PhD is a plus. Prior understanding of the shipping and logistics business domain is an advantage. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
Posted 2 weeks ago
4.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key responsibilities: Collaborate with business, platform and technology stakeholders to understand the scope of projects. Perform comprehensive exploratory data analysis at various levels of granularity of the data to derive inferences for further solutioning/experimentation/evaluation. Design, develop and deploy robust enterprise AI solutions using Generative AI, NLP, machine learning, etc. Continuously focus on providing business value while ensuring technical sustainability. Promote and drive adoption of cutting-edge data science and AI practices within the team. Continuously stay up to date on relevant technologies and use this knowledge to push the team forward. We are looking for: A team player with 4-7 years of experience in the field of data science and AI. Proficiency with programming/querying languages like Python, SQL and PySpark, along with Azure cloud platform tools like Databricks, ADF, Synapse, Web App, etc. An individual with strong work experience in the areas of text analytics, NLP and Generative AI. A person with a scientific and analytical thinking mindset, comfortable with brainstorming and ideation. A doer with a deep interest in driving business outcomes through AI/ML. A candidate with a bachelor's or master's degree in Engineering or Computer Science, with or without a specialization in AI/ML. A candidate with strong business acumen and a desire to collaborate with business teams and help them by solving business problems. Prior understanding of the shipping and logistics business domain is an advantage. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
Posted 2 weeks ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: · 3+ years of experience in implementing analytical solutions using Palantir Foundry, preferably in PySpark and hyperscaler platforms (cloud services like AWS, GCP and Azure), with a focus on building data transformation pipelines at scale. · Team management: Must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs. Candidate should have experience in hiring and nurturing talent in Palantir Foundry. · Training: Candidate should have experience in creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually. · At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services: · Data Engineering with Contour and Fusion · Dashboarding and report development using Quiver (or Reports) · Application development using Workshop.
· Exposure to Map and Vertex is a plus · Palantir AIP experience will be a plus · Hands-on experience in data engineering and building data pipelines (Code/No Code) for ELT/ETL data migration, data refinement and data quality checks on Palantir Foundry. · Hands-on experience of managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary. · Hands-on experience in working on & building the Ontology (esp. demonstrable experience in building semantic relationships). · Proficiency in SQL, Python and PySpark. Demonstrable ability to write & optimize SQL and Spark jobs. Some experience in Apache Kafka and Airflow is a prerequisite as well. · Hands-on experience with DevOps on hyperscaler platforms and Palantir Foundry is necessary. · Experience in MLOps is a plus. · Experience in developing and managing scalable architecture & working experience in managing large data sets. · Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus. · Experience with graph data and graph analysis libraries (like Spark GraphX, Python NetworkX etc.) is a plus. · A Palantir Foundry Certification (Solution Architect, Data Engineer) is a plus. The certificate should be valid at the time of interview. · Experience in developing GenAI applications is a plus Mandatory skill sets: · At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services Preferred skill sets: Palantir Foundry Years of experience required: 4 to 7 years (3+ years relevant) Education qualification: Bachelor's degree in Computer Science, Data Science or any other engineering discipline. Master’s degree is a plus. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Science Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Palantir (Software) Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
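For context on the kind of Foundry pipeline work described above, here is a minimal transforms-python sketch of a dataset-to-dataset transform; the dataset paths and column names are hypothetical and not taken from this listing:

```python
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F


@transform_df(
    Output("/Company/analytics/datasets/clean_orders"),          # hypothetical output dataset path
    raw_orders=Input("/Company/analytics/datasets/raw_orders"),  # hypothetical input dataset path
)
def clean_orders(raw_orders):
    # Drop records without a key and normalise the date column before it feeds downstream Ontology objects
    return (
        raw_orders
        .filter(F.col("order_id").isNotNull())
        .withColumn("order_date", F.to_date("order_date"))
        .dropDuplicates(["order_id"])
    )
```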
Posted 2 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Summary Position Summary Technical Lead – Big Data & Python skillset As a Technical Lead, you will be a strong full-stack developer and individual contributor, responsible for designing application modules and delivering them from the technical standpoint. You will need a high level of skill in producing high-level designs, working with the architect, and technically leading module implementations. You must be a strong developer with the ability to innovate, and should be the go-to person for the assigned modules, applications/projects and initiatives. Maintains appropriate certifications and applies respective skills on project engagements. Work you’ll do A unique opportunity to be a part of a growing Delivery, Methods & Tools team that drives consistency, quality, and efficiency of the services delivered to stakeholders. Responsibilities: Full-stack, hands-on developer and strong individual contributor. Go-to person on the assigned projects. Able to understand and implement the project as per the proposed architecture. Implements best design principles and patterns. Understands and implements the security aspects of the application. Knows ADO and is familiar with using it. Obtains/maintains appropriate certifications and applies respective skills on project engagements. Leads or contributes significantly to the Practice. Estimates and prioritizes product backlogs. Defines work items. Works on unit test automation. Recommends improvements to existing software programs as deemed necessary. Go-to person in the team for any technical issues. Conducts peer reviews and tech sessions within the team. Provides input to standards and guidelines. Implements best practices to enable consistency across all projects. Participates in continuous improvement processes, as assigned. Mentors and coaches juniors in the team. Contributes to POCs. Supports the QA team with clarifications/doubts. Takes ownership of deployment and tollgate activities. Oversees the development of documentation. Participates in regular work, status communications and stakeholder updates. Supports development of intellectual capital. Contributes to the knowledge network. Acts as a technical escalation point. Conducts sprint reviews. Optimizes code and advises the team on best practices. Skills: Education qualification: BE/B Tech (IT/CS/Electronics) / MCA / MSc Computer Science. 6-9 years of IT experience in application development, support or maintenance activities. 2+ years of experience in team management. Must have in-depth knowledge of software development lifecycles including agile development and testing. Enterprise Data Management framework, data security & compliance (optional). Data ingestion, storage and transformation. Data auditing and validation (optional). Data visualization with Power BI (optional). Data analytics systems (optional). Scaling and handling large data sets. Designing & building data services, with at least 2+ years in: Azure SQL DB, SQL Warehouse, ADF, Azure Storage, ADO CI/CD, Azure Synapse. Data model design; data entities: modeling and depiction. Metadata management (optional). Database development patterns and practices: SQL/NoSQL (relational/non-relational, native JSON), flexible schema, indexing practices, master/child model data management, columnar and row stores, API/SDK for NoSQL DB operations and management.
Design and implementation of data warehouses: Azure Synapse, Data Lake, Delta Lake, Apache Spark management. Programming languages: PySpark/Python, C# (optional). API: invoke/request and response. PowerShell with Azure CLI (optional). Git with ADO: repo management, branching strategies, version control management, rebasing, filtering, cloning, merging. Debugging, performance tuning and optimization skills: ability to analyze PySpark code and PL/SQL, enhance response times, manage GC, and apply debugging, logging and alerting techniques. Prior experience that demonstrates good business understanding is needed (experience in a professional services organization is a plus). Excellent written and verbal communication, organization, analytical, planning and leadership skills. Strong management, communication, technical and remote collaboration skills are a must. Experience in dealing with multiple projects and cross-functional teams, and ability to coordinate across teams in a large matrix organization environment. Ability to effectively conduct technical discussions directly with project/product management and clients. Excellent team collaboration skills. Education & Experience: Education qualification: BE/B Tech (IT/CS/Electronics) / MCA / MSc Computer Science. 6-9 years of domain experience or other relevant industry experience. 2+ years of Product Owner, Business Analyst or System Analysis experience. Minimum 3+ years of software development experience in .NET projects. 3+ years of experience in Agile/Scrum methodology. Work timings: 9am-4pm, 7pm-9pm. Location: Hyderabad. Experience: 6-9 yrs. The team At Deloitte, the Shared Services center improves overall efficiency and control while giving every business unit access to the company’s best and brightest resources. It also lets business units focus on what really matters – satisfying customers and developing new products and services to sustain competitive advantage. A shared services center is a simple concept, but making it work is anything but easy. It involves consolidating and standardizing a wildly diverse collection of systems, processes, and functions. And it requires a high degree of cooperation among business units that generally are not accustomed to working together – with people who do not necessarily want to change. The USI shared services team provides a wide array of services to the U.S. and is constantly evaluating and expanding its portfolio. The shared services team provides call center support, document services support, financial processing and analysis support, record management support, ethics and compliance support and admin assistant support. How You’ll Grow At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development.
Explore DU: The Leadership Center in India Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. #CAP-PD Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300914
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future. We are looking to hire AWS professionals in the following areas: AWS Data Engineer. JD as below. Primary skillsets: AWS services including Glue, PySpark, SQL, Databricks, Python. Secondary skillset: any ETL tool, GitHub, DevOps (CI/CD). Experience: 3-4 years. Degree in computer science, engineering, or similar fields. Mandatory skill set: Python, PySpark, SQL, AWS, with experience designing, developing, testing and supporting data pipelines and applications. 3+ years working experience in data integration and pipeline development. 3+ years of experience with AWS Cloud on data integration with a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, MongoDB/DynamoDB ecosystems. Databricks and Redshift experience is a major plus. 3+ years of experience using SQL in related development of data warehouse projects/applications (Oracle & SQL Server). Strong real-life experience in Python development, especially PySpark in an AWS Cloud environment. Strong SQL and NoSQL database skills: MySQL, Postgres, DynamoDB, Elasticsearch. Workflow management tools like Airflow. AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice). Good to have: Snowflake, Palantir Foundry. At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture
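As a rough illustration of the pipeline work this role describes, the sketch below shows a minimal AWS Glue PySpark job that reads raw JSON events from S3 and writes curated, partitioned Parquet; the bucket names and columns are assumptions, not part of the job description:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Resolve the standard Glue job argument and initialise the job
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON events from S3 (illustrative path), deduplicate, and derive a date partition column
events = spark.read.json("s3://example-bucket/raw/events/")
curated = (
    events.dropDuplicates(["event_id"])
          .withColumn("event_date", F.to_date("event_ts"))
)

# Write curated Parquet partitioned by date for downstream Athena/Redshift Spectrum consumers
(
    curated.write.mode("overwrite")
           .partitionBy("event_date")
           .parquet("s3://example-bucket/curated/events/")
)

job.commit()
```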
Posted 2 weeks ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Design and develop scalable systems for processing unstructured data into actionable insights using Python, Flask, and Azure Cognitive Services Integrate Optical Character Recognition (OCR), Speech-to-Text, and NLP models into workflows to handle various file formats such as PDFs, images, audio files, and text documents Implement robust error-handling mechanisms, multithreaded architectures, and RESTful APIs to ensure seamless user experiences. Utilize Azure OpenAI, Azure Speech SDK, and Azure Form Recognizer to create AI-powered solutions tailored to meet complex business requirements Collaborate with cross-functional teams to drive innovation and implement analytics workflows and ML models to enhance business processes and decision-making Ensure the accuracy, efficiency, and scalability of systems focusing on healthcare claims processing, document digitization, and data extraction Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications 8+ years of relevant experience in AI/ML engineering and cognitive automation Proven experience as an AI/ML Engineer, Software Engineer, Data Analyst, or a similar role in the tech industry Extensive experience with Azure Cognitive Services and other AI technologies SQL, Python, PySpark, Scala experience Proficient in developing and deploying machine learning models and handling large data sets Proven solid programming skills in Python and familiarity with Flask web framework Proven excellent problem-solving skills and the ability to work in a fast-paced environment Proven solid communication and collaboration skills, capable of working effectively with cross-functional teams. Demonstrated ability to implement robust ETL or ELT workflows for structured and unstructured data ingestion, transformation, and storage Preferred Qualification Experience in healthcare industries Skills Python Programming and SQL Data Analytics and Machine Learning Classification and Unsupervised Learning Regression and NLP Cloud and DevOps Foundations Data Visualization and Reporting At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. 
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
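A minimal sketch of the kind of document-digitization workflow the listing above describes, exposing Azure Form Recognizer's prebuilt read model behind a Flask endpoint; the resource endpoint, key handling, and route name are illustrative assumptions, not details from the posting:

```python
from flask import Flask, jsonify, request
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

app = Flask(__name__)

# Placeholder endpoint and key; a real deployment would read these from configuration or Key Vault
client = DocumentAnalysisClient(
    endpoint="https://example-resource.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)


@app.route("/extract-text", methods=["POST"])
def extract_text():
    # Accept an uploaded PDF or image and run OCR with the prebuilt read model
    uploaded = request.files["file"]
    poller = client.begin_analyze_document("prebuilt-read", document=uploaded.stream)
    result = poller.result()
    lines = [line.content for page in result.pages for line in page.lines]
    return jsonify({"pages": len(result.pages), "lines": lines})


if __name__ == "__main__":
    app.run(debug=True)
```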
Posted 2 weeks ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
7+ years of experience in Big Data with strong expertise in Spark and Scala. Mandatory Skills: Big Data, primarily Spark and Scala. Strong knowledge of HDFS, Hive and Impala, with knowledge of Unix, Oracle and Autosys. Good to Have: Agile methodology and banking expertise. Strong communication skills. Not limited to Spark batch; Spark Streaming experience is needed. NoSQL DB experience: HBase/Mongo/Couchbase
Posted 2 weeks ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description: Business Title QA Manager Years Of Experience 10+ Job Description The purpose of this role is to ensure the developed software meets the client requirements and the business’ quality standards within the project release cycle and established processes, and to lead QA technical initiatives in order to optimize the test approach and tools. Must Have Skills At least 2 years in a lead role. Experience with Azure cloud. Testing file-based data lake or Big Data-based solutions. Worked on migration or implementation of Azure Data Factory projects. Strong experience in ETL/data pipeline testing, preferably with Azure Data Factory. Proficiency in SQL for data validation and test automation. Familiarity with Azure services: Data Lake, Synapse Analytics, Azure SQL, Key Vault, and Logic Apps. Experience with test management tools (e.g., Azure DevOps, JIRA, TestRail). Understanding of CI/CD pipelines and integration of QA in DevOps workflows. Experience with data quality frameworks (e.g., Great Expectations, Deequ). Knowledge of Python or PySpark for data testing automation. Exposure to Power BI or other BI tools for test result visualization. Azure Data Factory. Exposure to Azure Databricks. SQL/stored procedures on SQL Server. ADLS Gen2. Exposure to Python/Shell scripting. Good To Have Skills Exposure to any ETL tool. Any other Cloud experience (AWS/GCP). Exposure to Spark architecture, including Spark Core, Spark SQL, DataFrame, Spark Streaming, and fault tolerance mechanisms. ISTQB or equivalent QA certification. Working experience on JIRA and Agile. Experience with testing SOAP/API projects. Stakeholder communication. Microsoft Office. Key responsibilities Lead the QA strategy, planning, and execution for ADF-based data pipelines and workflows. Design and implement test plans, test cases, and test automation for data ingestion, transformation, and loading processes. Validate data accuracy, completeness, and integrity across source systems, staging, and target data stores (e.g., Azure SQL, Synapse, Data Lake). Collaborate with data engineers, architects, and business analysts to understand data flows and ensure test coverage. Develop and maintain automated data validation scripts using tools like PySpark, SQL, PowerShell, or Azure Data Factory Data Flows. Monitor and report on data quality metrics, defects, and test coverage. Ensure compliance with data governance, security, and privacy standards. Mentor junior QA team members and coordinate testing efforts across sprints. Education Qualification Minimum Bachelor’s degree in Computer Science, Information Systems, or a related field. Certification (if any) Any basic-level certification in AWS/Azure/GCP; Snowflake Associate/Core. Shift timing 12 PM to 9 PM and/or 2 PM to 11 PM, IST time zone. Location: DGS India - Mumbai - Goregaon Prism Tower Brand: Merkle Time Type: Full time Contract Type: Permanent
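To illustrate the automated data validation work described in this listing, here is a minimal PySpark sketch that compares a source extract against the target produced by an ADF pipeline; the storage paths and the key column are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adf-pipeline-validation").getOrCreate()

# Illustrative ADLS Gen2 paths: source extract and the curated target written by an ADF pipeline
source = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/claims/")
target = spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/claims/")

# Basic reconciliation checks: row counts, null keys, duplicate keys
checks = {
    "row_count_match": source.count() == target.count(),
    "no_null_keys": target.filter(F.col("claim_id").isNull()).count() == 0,
    "no_duplicate_keys": target.groupBy("claim_id").count().filter("count > 1").count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
assert not failed, f"Data validation failed: {failed}"
print("All data validation checks passed.")
```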
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Mandatory Skills 4-6 years of experience with basic proficiency in Python and SQL, and familiarity with libraries like NumPy or Pandas. Understanding of fundamental programming concepts (data structures, algorithms, etc.). Eagerness to learn new tools and frameworks, including Generative AI technologies. Familiarity with version control systems (e.g., Git). Strong problem-solving skills and attention to detail. Exposure to data processing tools like Apache Spark or PySpark, and SQL. Basic understanding of APIs and how to integrate them. Interest in AI/ML and willingness to explore frameworks like LangChain. Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus. Job Description We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools. Key Responsibilities Assist in developing and maintaining Python-based applications. Write clean, efficient, and well-documented code. Collaborate with senior developers to integrate APIs and frameworks. Support data processing tasks using libraries like Pandas or PySpark. Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance. Debug and troubleshoot issues in existing applications.
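A small example of the kind of data processing task mentioned above, using Pandas; the file name and columns are assumptions for illustration only:

```python
import pandas as pd

# Illustrative CSV of orders; column names are assumed for the example
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Drop rows missing a customer key, then summarise monthly order volume and revenue
summary = (
    orders.dropna(subset=["customer_id"])
          .groupby(orders["order_date"].dt.to_period("M"))["amount"]
          .agg(["count", "sum", "mean"])
          .rename(columns={"count": "orders", "sum": "revenue", "mean": "avg_order_value"})
)

print(summary.head())
```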
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to Ford Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for Ford Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas and who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and 3rd party technologies for deployment on Google Cloud Platform. Responsibilities Design and build production data engineering solutions on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, DataForm, Astronomer, Data Fusion, DataProc, Cloud Composer/Air Flow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, App Engine, and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Design new solutions to better serve AI/ML needs. Lead teams to expand our AI-enabled services. Partner with governance teams to tackle key business needs. Collaborate with stakeholders and cross-functional teams to gather and define data requirements and ensure alignment with business objectives. Partner with analytics teams to understand how value is created using data. Partner with central teams to leverage existing solutions to drive future products. Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage. Create insights into existing data to fuel the creation of new data products. Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, documenting information flows. Implement and champion an enterprise data governance model. Actively promote data protection, sharing, reuse, quality, and standards to ensure data integrity and confidentiality. Develop and maintain documentation for data engineering processes, standards, and best practices. Ensure knowledge transfer and ease of system maintenance. Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures. Provide production support by addressing production issues as per SLAs. Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure. Work within an agile product team. Deliver code frequently using Test-Driven Development (TDD), continuous integration, and continuous deployment (CI/CD). Continuously enhance your domain knowledge. 
Stay current on the latest data engineering practices. Contribute to the company's technical direction while maintaining a customer-centric approach. Qualifications GCP-certified Professional Data Engineer. Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions. 5+ years of complex SQL development experience. 2+ years of experience with programming languages such as Python, Java, or Apache Beam. Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications through to production-scale solutions. In-depth understanding of GCP’s underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, DataForm, Astronomer, Data Fusion, DataProc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build and App Engine, alongside storage including Cloud Storage, and DevOps tools such as Tekton, GitHub, Terraform, Docker. Expert in designing, optimizing, and troubleshooting complex data pipelines. Experience developing and deploying microservices architectures leveraging container orchestration frameworks. Experience in designing pipelines and architectures for data processing. Passion and self-motivation to develop/experiment/implement state-of-the-art data engineering methods/techniques. Self-directed, works independently with minimal supervision, and adapts to ambiguous environments. Evidence of a proactive problem-solving mindset and willingness to take the initiative. Strong prioritization, collaboration & coordination skills, and ability to simplify and communicate complex ideas with cross-functional teams and all levels of management. Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity. Master’s degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field. Data engineering or development experience gained in a regulated financial environment. Experience in coaching and mentoring data engineers. Project management tools like Atlassian JIRA. Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Experience with data security, governance, and compliance best practices in the cloud. Experience using data science concepts on production datasets to generate insights
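As a simplified illustration of the GCP data-product work this listing describes, the sketch below loads curated Parquet files from Cloud Storage into a partitioned BigQuery table and runs a quick downstream check; the project, dataset, and bucket names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # illustrative project id

# Load curated files from Cloud Storage into a date-partitioned BigQuery table
load_job = client.load_table_from_uri(
    "gs://example-bucket/curated/receivables/*.parquet",
    "example-project.finance.receivables",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
        time_partitioning=bigquery.TimePartitioning(field="as_of_date"),
    ),
)
load_job.result()  # wait for the load to complete

# Simple downstream sanity check a data product or analytics consumer might run
rows = client.query(
    "SELECT as_of_date, SUM(balance) AS total_balance "
    "FROM `example-project.finance.receivables` "
    "GROUP BY as_of_date ORDER BY as_of_date DESC LIMIT 7"
).result()
for row in rows:
    print(row.as_of_date, row.total_balance)
```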
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Evernorth Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don’t, won’t or can’t. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there. Position Overview Excited to grow your career? This position’s primary responsibility will be to translate software requirements into functions using Mainframe, ETL, and Data Engineering skills, with expertise in Databricks and database technologies. This position offers the opportunity to work on modernizing legacy systems, contribute to cloud infrastructure automation, and support production systems in a fast-paced, agile environment. You will work across multiple teams and technologies to ensure reliable, high-performance data solutions that align with business goals. As a Mainframe & ETL Engineer, you will be responsible for the end-to-end development and support of data processing solutions using tools such as Talend, Ab Initio, AWS Glue, and PySpark, with significant work on Databricks and modern cloud data platforms. You will support infrastructure provisioning using Terraform, assist in modernizing legacy systems including mainframe migration, and contribute to performance tuning of complex SQL queries across multiple database platforms including Teradata, Oracle, Postgres, and DB2. You will also be involved in CI/CD practices. Responsibilities Support, maintain and participate in the development of software utilizing technologies such as COBOL, DB2, CICS and JCL. Support, maintain and participate in the ETL development of software utilizing technologies such as Talend, Ab Initio, Python, and PySpark using Databricks. Work with Databricks to design and manage scalable data processing solutions. Implement and support data integration workflows across cloud (AWS) and on-premises environments. Support cloud infrastructure deployment and management using Terraform. Participate in the modernization of legacy systems, including mainframe migration. Perform complex SQL queries and performance tuning on large datasets. Contribute to CI/CD pipelines, version control, and infrastructure automation. Provide expertise, tools, and assistance to operations, development, and support teams for critical production issues and maintenance. Troubleshoot production issues, diagnose the problem, and implement a solution - first line of defense in finding the root cause. Work cross-functionally with the support team, development team and business team to efficiently address customer issues. Active member of a high-performance software development and support team in an agile environment. Engaged in fostering and improving organizational culture. Qualifications Required Skills: Strong analytical and technical skills. Proficiency in Databricks – including notebook development, Delta Lake, and Spark-based processing. Experience with mainframe modernization or migrating legacy systems to modern data platforms. Strong programming skills, particularly in PySpark for data processing. Familiarity with data warehousing concepts and cloud-native architecture.
Solid understanding of Terraform for managing infrastructure as code on AWS. Familiarity with CI/CD practices and tools (e.g., Git, Jenkins). Strong SQL knowledge on OLAP DB platforms (Teradata, Snowflake) and OLTP DB platforms (Oracle, DB2, Postgres, SingleStore). Strong experience with Teradata SQL and utilities. Strong experience with Oracle, Postgres and DB2 SQL and utilities. Ability to develop high-quality database solutions. Ability to perform extensive analysis of complex SQL processes, with strong design skills. Ability to analyze existing SQL queries for performance improvements. Experience in software development phases including design, configuration, testing, debugging, implementation, and support of large-scale, business-centric and process-based applications. Proven experience working with diverse teams of technical architects, business users and IT areas on all phases of the software development life cycle. Exceptional analytical and problem-solving skills. Structured, methodical approach to systems development and troubleshooting. Ability to ramp up fast on a system architecture. Experience in designing and developing process-based solutions or BPM (business process management). Strong written and verbal communication skills with the ability to interact with all levels of the organization. Strong interpersonal/relationship management skills. Strong time and project management skills. Familiarity with agile methodology including SCRUM team leadership. Familiarity with modern delivery practices such as continuous integration, behavior/test driven development, and specification by example. Desire to work in the application support space. Passion for learning and desire to explore all areas of IT. Required Experience & Education Minimum of 8-12 years of experience in an application development role. Bachelor’s degree equivalent in Information Technology, Business Information Systems, Technology Management, or related field of study. Location & Hours of Work: Hyderabad and Hybrid (1:00 PM IST to 10:00 PM IST) Equal Opportunity Statement Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations. About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 2 weeks ago
12.0 - 20.0 years
16 - 30 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Role & responsibilities Role: Analytics Director, Pharma Analytics Role overview: Lead analytics engagements from discovery to delivery, managing key stakeholders across functions, and ensuring alignment with business goals. The position demands a strategic mindset to challenge conventional thinking, identify actionable insights, and deliver innovative, data-driven solutions. The individual will also ensure adherence to established policies in programming, documentation, and system management. Key Responsibilities: Lead end-to-end analytics engagements in the healthcare/pharma domain, managing cross-functional stakeholders across client and internal teams. Challenge conventional analytical approaches and proactively recommend innovative strategies and methodologies. Drive discovery and requirements gathering to ensure business context is accurately captured and translated into analytical solutions. Generate quantitatively-driven insights to solve complex commercial problems such as targeting, segmentation, campaign analytics, omnichannel analytics and performance optimization. Identify next-best actions and strategic recommendations to enhance brand performance, sales force effectiveness, launch analytics, test & control analysis, campaign effectiveness measurement, etc. Ensure adherence to programming standards, project documentation protocols, and system management policies across analytics workstreams. Apply deep domain knowledge of pharma commercialization to design solutions aligned with industry-specific compliance, market access, and competitive dynamics. Preferred Qualifications: Bachelor's or Master's degree in a quantitative discipline (e.g., Statistics, Economics, Mathematics, Engineering, Computer Science) or related field. 12+ years of experience in commercial analytics in the pharmaceutical or healthcare domain. Strong understanding of pharma commercial processes such as sales force effectiveness, brand performance tracking, patient analytics, and omnichannel marketing. Proficiency in analytical tools such as SQL, Python, PySpark and Databricks for data manipulation and modelling. Familiarity with healthcare data sources like IQVIA (Xponent, DDD, LAAD), Symphony, APLD, EHR/EMR, and claims data. Proven ability to manage stakeholders and translate business needs into actionable analytics solutions. Strong communication and storytelling skills to present complex insights to both technical and non-technical audiences.
Posted 2 weeks ago
5.0 - 8.0 years
15 - 22 Lacs
Gurugram
Work from Office
Experience: 6-8 years overall, with at least 2-3 years of deep hands-on experience in each key area below. What you’ll do Own and evolve our end-to-end data platform, ensuring robust pipelines, data lakes, and warehouses with 100% uptime. Build and maintain real-time and batch pipelines using Debezium, Kafka, Spark, Apache Iceberg, Trino, and Clickhouse. Manage and optimize our databases (PostgreSQL, DocumentDB, MySQL RDS) for performance and reliability. Drive data quality management: understand, enrich, and maintain context for trustworthy insights. Develop and maintain reporting services for data exports, file deliveries, and embedded dashboards via Apache Superset. Use orchestration tools like Maestro (or similar DAGs) for reliable, observable workflows. Leverage LLMs and other AI models to generate insights and automate agentic tasks that enhance analytics and reporting. Build domain expertise to solve complex data problems and deliver actionable business value. Collaborate with analysts, data scientists, and engineers to maximize the impact of our data assets. Write robust, production-grade Python code for pipelines, automation, and tooling. What you’ll bring Experience with our open-source data pipeline, data lake, and warehouse stack. Strong Python skills for data workflows and automation. Hands-on orchestration experience with Maestro, Airflow, or similar. Practical experience using LLMs or other AI models for data tasks. Solid grasp of data quality, enrichment, and business context. Experience with dashboards and BI using Apache Superset (or similar tools). Strong communication and problem-solving skills.
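A minimal sketch of the kind of real-time pipeline described above: Spark Structured Streaming reading Debezium CDC events from Kafka and landing them in the data lake. The topic, schema, and paths are illustrative, and the Debezium envelope is simplified to a flat payload:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Simplified schema for the change event payload (real Debezium messages carry a fuller envelope)
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("created_at", TimestampType()),
])

# Read change events published to Kafka (broker and topic names are illustrative)
raw = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "kafka:9092")
         .option("subscribe", "pg.public.orders")
         .load()
)

orders = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("o")).select("o.*")

# Append parsed records to the data lake with a checkpoint for exactly-once-style recovery
query = (
    orders.writeStream
          .format("parquet")
          .option("path", "s3a://example-datalake/orders/")
          .option("checkpointLocation", "s3a://example-datalake/_checkpoints/orders/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```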
Posted 2 weeks ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Experience in building PySpark processes. Proficient understanding of distributed computing principles. Experience in managing Hadoop clusters with all services. Experience with NoSQL databases and messaging systems like Kafka. Designing, building, installing, configuring and supporting Hadoop. Perform analysis of vast data stores. Good understanding of cloud technology. Must have strong technical experience in design, mapping specifications, HLD and LLD. Must have the ability to relate to both business and technical members of the team and possess excellent communication skills. Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with our internal Archival Service Platform for efficient data purging. Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management. Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs. Develop and maintain data platforms using PySpark. Work with AWS and Big Data, design and implement data pipelines, and ensure data quality and integrity. Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. Implement and manage agents for monitoring, logging, and automation within AWS environments. Handle migration from PySpark to AWS.
Posted 2 weeks ago
5.0 - 7.0 years
5 - 14 Lacs
Pune, Gurugram, Bengaluru
Work from Office
• Hands-on experience in object-oriented programming using Python, PySpark, APIs, SQL, BigQuery, GCP • Building data pipelines for large volumes of data • Dataflow, Dataproc and BigQuery • Deep understanding of ETL concepts
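For illustration of the kind of pipeline implied here, a minimal PySpark job (as it might run on Dataproc with the spark-bigquery connector available) that aggregates a BigQuery table and writes the result back; the table and bucket names are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bq-aggregation").getOrCreate()

# Read a BigQuery table through the spark-bigquery connector (illustrative table name)
events = (
    spark.read.format("bigquery")
         .option("table", "example-project.analytics.events")
         .load()
)

# Aggregate daily active users from raw events
daily = (
    events.groupBy(F.to_date("event_ts").alias("event_date"))
          .agg(F.countDistinct("user_id").alias("daily_active_users"))
)

# Write the aggregate back to BigQuery via a temporary GCS staging bucket
(
    daily.write.format("bigquery")
         .option("table", "example-project.analytics.daily_active_users")
         .option("temporaryGcsBucket", "example-temp-bucket")
         .mode("overwrite")
         .save()
)
```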
Posted 2 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Skills desired: Strong at SQL (multi-pyramid SQL joins). Python skills (FastAPI or Flask framework). PySpark. Commitment to work in overlapping hours. GCP knowledge (BQ, DataProc and Dataflow). Amex experience is preferred (not mandatory). Power BI preferred (not mandatory). Flask, PySpark, Python, SQL
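A minimal FastAPI sketch of the sort of Python service skills listed above; the endpoints and in-memory store are purely illustrative:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="metrics-service")


class Metric(BaseModel):
    name: str
    value: float


# In-memory store for the sketch; a real service would back this with BigQuery or another database
_store: dict[str, float] = {}


@app.post("/metrics")
def upsert_metric(metric: Metric) -> dict:
    # Create or update a named metric
    _store[metric.name] = metric.value
    return {"status": "ok"}


@app.get("/metrics/{name}")
def get_metric(name: str) -> Metric:
    # Return a previously stored metric or a 404 if it does not exist
    if name not in _store:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=_store[name])
```

Run locally with, for example, `uvicorn app:app --reload` (assuming the file is named app.py).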
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Software Engineer, PySpark This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority It’s a chance to hone your existing technical skills and advance your career We're offering this role at associate vice president level What you'll do In your new role, you’ll engineer and maintain innovative, customer centric, high performance, secure and robust solutions. You’ll be working within a feature team and using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform. You’ll Also Be Producing complex and critical software rapidly and of high quality which adds value to the business Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning Collaborating to optimise our software engineering capability Designing, producing, testing and implementing our working code Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations The skills you'll need You’ll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need at least nine years of experience in PySpark, SQL and AWS. You’ll Also Need Experience of working with development and testing tools, bug tracking tools and wikis Experience in multiple programming languages or low code toolsets Experience of DevOps, Testing and Agile methodology and associated toolsets A background in solving highly complex, analytical and numerical problems Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Chennai, Bengaluru
Work from Office
Availability: Immediate preferred Key Responsibilities: Design and implement advanced data science workflows using Azure Databricks. Collaborate with cross-functional teams to scale data pipelines. Optimize and fine-tune PySpark jobs for performance and efficiency. Support real-time analytics and big data use cases in a remote-first agile environment. Required Skills: Proven experience in Databricks, PySpark, and big data architecture. Ability to work with data scientists to operationalize models. Strong understanding of data governance, security, and performance. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 2 weeks ago
175.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? This role will be part of the Treasury Applications Platform team. We are currently modernizing our platform, migrating it to GCP. You will contribute towards making the platform more resilient and secure for future regulatory requirements and ensuring compliance and adherence to Federal Regulations. Preferably a BS or MS degree in Computer Science, Computer Engineering, or another technical discipline. 10+ years of software development experience. Ability to effectively interpret technical and business objectives and challenges and articulate solutions. Willingness to learn new technologies and exploit them to their optimal potential. Strong experience in Finance, Controllership, and Treasury Applications. Strong background with Java, Python, PySpark, SQL, concurrency/parallelism, Oracle, big data, and in-memory computing platforms. Cloud experience with GCP is preferred. Conduct IT requirements gathering. Define problems and provide solution alternatives. Solution architecture and system design. Create detailed system design documentation. Implement deployment plans. Understand business requirements with the objective of providing high-quality IT solutions. Support the team in different phases of the project including problem definition, effort estimation, diagnosis, solution generation, design and deployment. Under supervision, participate in unit-level and organizational initiatives with the objective of providing high-quality and value-adding consulting solutions. Troubleshoot issues, diagnose problems, and conduct root-cause analysis. Perform secondary research as instructed by supervisor to assist in strategy and business planning. Minimum Qualifications: Strong experience with cloud architecture. Deep understanding of SDLC, OOAD, CI/CD, containerization, Agile, Java, PL/SQL. Preferred Qualifications: GCP, big data processing systems, Finance Treasury Cash Management, Kotlin experience, Kafka, Open Telemetry, Network. We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary: JD for AWS PySpark Data Engineer. Experience: 6-9 years as a Data Engineer with a strong focus on PySpark and large-scale data processing. PySpark Expertise: Decent to proficient in writing optimized PySpark code, including working with DataFrames, Spark SQL, and performing complex transformations. AWS Cloud Proficiency: Fair experience with core AWS services, such as S3, Glue, EMR, Lambda, and Redshift, with the ability to manage and optimize data workflows on AWS. Performance Optimization: Proven ability to optimize PySpark jobs for performance, including experience with partitioning, caching, and handling skewed data. Problem-Solving Skills: Strong analytical and problem-solving skills, with a focus on troubleshooting data issues and optimizing performance in distributed environments. Communication and Collaboration: Excellent communication skills to work effectively with cross-functional teams and clearly document technical processes. Added advantage: AWS Glue ETL: Hands-on experience with AWS Glue ETL jobs, including creating and managing workflows, handling job bookmarks, and implementing transformations. Database: Good working knowledge of data warehouses like Redshift.
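A minimal sketch of one way to handle the skewed-data join scenario mentioned above, using key salting in plain PySpark. The S3 paths and the orders/customers/customer_id names are illustrative assumptions.

```python
# Minimal sketch, assuming hypothetical S3 paths and a skewed customer_id key.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-salting-sketch").getOrCreate()
SALT_BUCKETS = 16

orders = spark.read.parquet("s3://example-bucket/orders/")        # large, skewed side
customers = spark.read.parquet("s3://example-bucket/customers/")  # one row per customer

# Add a random salt to the large side to spread hot keys across partitions.
orders_salted = orders.withColumn("salt", (F.rand() * SALT_BUCKETS).cast("int"))

# Replicate the small side once per salt value so every (key, salt) pair can match.
salts = spark.range(SALT_BUCKETS).select(F.col("id").cast("int").alias("salt"))
customers_salted = customers.crossJoin(salts)

# Join on the key plus the salt, then drop the helper column.
joined = (orders_salted
          .join(customers_salted, on=["customer_id", "salt"], how="inner")
          .drop("salt"))

joined.write.mode("overwrite").parquet("s3://example-bucket/curated/orders_enriched/")
```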
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description About the Company - We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. About The Role We’re looking for an experienced Data Scientist who will help us build marketing attribution, causal inference, and uplift models to improve the effectiveness and efficiency of our marketing efforts. This person will also design experiments and help us drive consistent approach to experimentation and campaign measurement to support a range of marketing, customer engagement, and digital use cases. This Lead Data Scientist brings significant experience in designing, developing, and delivering statistical models and AI/ML algorithms for marketing and digital optimization use cases on large-scale data sets in a cloud environment. They show rigor in how they prototype, test, and evaluate algorithm performance both in the testing phase of algorithm development and in managing production algorithms. They demonstrate advanced knowledge of statistical and machine learning techniques along with ensuring the ethical use of data in the algorithm design process. At Salesforce, Trust is our number one value and we expect all applications of statistical and machine learning models to adhere to our values and policies to ensure we balance business needs with responsible uses of technology. Responsibilities As part of the Marketing Effectiveness Data Science team within the Salesforce Marketing Data Science organization, develop statistical and machine learning models to improve marketing effectiveness - e.g., marketing attribution models, causal inference models, uplift models, etc. Develop optimization and simulation algorithms to provide marketing investment and allocation recommendations to improve ROI by optimizing spend across marketing channels. Own the full lifecycle of model development from ideation and data exploration, algorithm design and testing, algorithm development and deployment, to algorithm monitoring and tuning in production. Design experiments to support marketing, customer experience, and digital campaigns and develop statistically sound models to measure impact. Collaborate with other data scientists to develop and operationalize consistent approaches to experimentation and campaign measurement. Be a master in cross-functional collaboration by developing deep relationships with key partners across the company and coordinating with working teams. Constantly learn, have a clear pulse on innovation across the enterprise SaaS, AdTech, paid media, data science, customer data, and analytics communities. Required Skills 8+ years of experience designing models for marketing optimization such as multi-channel attribution models, customer lifetime value models, propensity models, uplift models, etc. using statistical and machine learning techniques. 8+ years of experience using advanced statistical techniques for experiment design (A/B and multi-cell testing) and causal inference methods for understanding business impact. 
Must have multiple, robust examples of using these techniques to measure effectiveness of marketing efforts and to solve business problems on large-scale data sets. 8+ years of experience with one or more programming languages such as Python, R, PySpark, Java. Expert-level knowledge of SQL with strong data exploration and manipulation skills. Experience using cloud platforms such as GCP and AWS for model development and operationalization is preferred. Must have superb quantitative reasoning and interpretation skills with strong ability to provide analysis-driven business insight and recommendations. Excellent written and verbal communication skills; ability to work well with peers and leaders across data science, marketing, and engineering organizations. Creative problem-solver who simplifies problems to their core elements. B2B customer data experience a big plus. Advanced Salesforce product knowledge is also a plus.
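A minimal sketch of the kind of experiment measurement described above: a two-proportion z-test on a hypothetical A/B campaign cell, using statsmodels. The conversion counts are made-up illustrative numbers, not real results.

```python
# Minimal sketch, with fabricated illustrative counts for a two-cell test.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1180, 1025]    # treated, control conversions (hypothetical)
exposed     = [50000, 50000]  # users exposed in each cell (hypothetical)

# Two-sided test of whether the treated and control conversion rates differ.
z_stat, p_value = proportions_ztest(count=conversions, nobs=exposed,
                                    alternative="two-sided")

lift = conversions[0] / exposed[0] - conversions[1] / exposed[1]
print(f"absolute lift = {lift:.4%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```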
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY's GDS Tax Technology team's mission is to develop, implement and integrate technology solutions that better serve our clients and engagement teams. As a member of EY's core Tax practice, you'll develop a deep tax technical knowledge and outstanding database, data analytics and programming skills. Ever-increasing regulations require tax departments to gather, organize and analyse more data than ever before. Often the data necessary to satisfy these ever-increasing and complex regulations must be collected from a variety of systems and departments throughout an organization. Effectively and efficiently handling the variety and volume of data is often extremely challenging and time-consuming for a company. EY's GDS Tax Technology team members work side-by-side with the firm's partners, clients and tax technical subject matter experts to develop and incorporate technology solutions that enhance value-add, improve efficiencies and enable our clients with disruptive and market-leading tools supporting Tax. GDS Tax Technology works closely with clients and professionals in the following areas: Federal Business Tax Services, Partnership Compliance, Corporate Compliance, Indirect Tax Services, Human Capital, and Internal Tax Services. GDS Tax Technology provides solution architecture, application development, testing and maintenance support to the global Tax service line, both on a proactive basis and in response to specific requests. EY is currently seeking a Data Engineer - Staff to join our Tax Technology practice in India.

Key Responsibilities: Must have experience with Azure Databricks. Must have strong knowledge of Python and PySpark programming. Must have strong Azure SQL Database and Azure SQL Data Warehouse concepts. Develops, maintains, and optimizes all data layer components for new and existing systems, including databases, stored procedures, ETL packages, and SQL queries. Experience with Azure data platform offerings. Ability to effectively communicate with other team members and stakeholders.

Qualification & Experience Required: Candidates should have between 1.5 and 3 years of experience in the Azure data platform (Azure Databricks), with strong knowledge of Python and PySpark. Strong verbal and written communication skills. Ability to work as an individual contributor. Experience with Azure Data Factory, SSIS or any other ETL tools.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
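A minimal sketch, under assumed storage account, table and credential names, of the kind of Azure Databricks ETL step this posting refers to: read raw files from ADLS, clean them with PySpark, and load the result into an Azure SQL Database table. In practice the password would come from a Key Vault-backed secret scope and Azure Data Factory or Databricks Jobs would orchestrate the run.

```python
# Minimal sketch; storage account, database, table and credential names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tax-etl-sketch").getOrCreate()

# Read raw CSV files from an assumed ADLS Gen2 container.
raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplestorage.dfs.core.windows.net/invoices/"))

# Basic cleansing: typed columns and de-duplication on the business key.
cleaned = (raw
           .withColumn("invoice_date", F.to_date("invoice_date", "yyyy-MM-dd"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .dropDuplicates(["invoice_id"]))

# Load into an assumed Azure SQL Database table over JDBC.
(cleaned.write
 .format("jdbc")
 .option("url", "jdbc:sqlserver://example-server.database.windows.net:1433;database=tax")
 .option("dbtable", "dbo.invoices_clean")
 .option("user", "etl_user")
 .option("password", "<retrieved-from-key-vault>")
 .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
 .mode("append")
 .save())
```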
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Remote
Role & responsibilities: At least 5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python. Databricks with knowledge of PySpark. Database: Oracle or any other database. Programming: Python, with awareness of Streamlit.
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.

Role Overview: The Senior Tech Lead - AWS Data Engineering leads the design, development and optimization of data solutions on the AWS platform. The jobholder has a strong background in data engineering, cloud architecture, and team leadership, with a proven ability to deliver scalable and secure data systems.

Responsibilities: Lead the design and implementation of AWS-based data architectures and pipelines. Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and ensure alignment with business goals. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in AWS data environments. Stay updated on the latest AWS technologies and industry trends.

Key Technical Skills & Responsibilities: Overall 10+ years of experience in IT. Minimum 5-7 years in design and development of cloud data platforms using AWS services. Must have experience in the design and development of data lake / data warehouse / data analytics solutions using AWS services like S3, Lake Formation, Glue, Athena, EMR, Lambda and Redshift. Must be aware of AWS access control and data security features like VPC, IAM, Security Groups, KMS, etc. Must be good with Python and PySpark for data pipeline building. Must have data modeling experience, including S3 data organization. Must have an understanding of Hadoop components, NoSQL databases, graph databases and time series databases, and the AWS services available for those technologies. Must have experience working with structured, semi-structured and unstructured data. Must have experience with streaming data collection and processing; Kafka experience is preferred. Experience migrating data warehouse / big data applications to AWS is preferred. Must be able to use Gen AI services (like Amazon Q) for productivity gains.

Eligibility Criteria: Bachelor's degree in Computer Science, Data Engineering, or a related field. Extensive experience with AWS data services and tools. AWS certification (e.g., AWS Certified Data Analytics - Specialty). Experience with machine learning and AI integration in AWS environments. Strong understanding of data modeling, ETL/ELT processes, and cloud integration. Proven leadership experience in managing technical teams. Excellent problem-solving and communication skills.

Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance: integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture. Let's grow together.
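A minimal sketch of the S3 data organization work the posting mentions: landing raw JSON and writing a Hive-style partitioned Parquet layout that Glue and Athena can catalog and query. Bucket names and columns are hypothetical assumptions.

```python
# Minimal sketch, assuming hypothetical bucket names and an event_ts column.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-layout-sketch").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/shipments/2024/")

curated = (raw
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("year", F.year("event_ts"))
           .withColumn("month", F.month("event_ts")))

# Hive-style year=/month= partitions keep Athena scans narrow and make the
# dataset straightforward to register in the Glue Data Catalog.
(curated.write
 .mode("append")
 .partitionBy("year", "month")
 .parquet("s3://example-curated-bucket/shipments/"))
```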
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.

Roles and Responsibilities: The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The jobholder has extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities: Lead the design and implementation of Databricks-based data solutions. Architect and optimize data pipelines for batch and streaming data. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and deliverables. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in Databricks environments. Stay updated on the latest Databricks features and industry trends.

Key Technical Skills & Responsibilities: Experience in data engineering using Databricks or Apache Spark-based platforms. Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion. Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse. Proficiency in programming languages such as Python, Scala and SQL for data processing and transformation. Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing. Familiarity with Delta Lake, Delta Live Tables, and medallion architecture for data lakehouse implementations (a minimal sketch of this pattern follows this posting). Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation. Design and implement Azure Key Vault and scoped credentials. Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning. Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups. Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus. Ability to define best practices, support multiple projects, and mentor junior engineers is a plus. Must have experience working with streaming data sources and Kafka (preferred).

Eligibility Criteria: Bachelor's degree in Computer Science, Data Engineering, or a related field. Extensive experience with Databricks, Delta Lake, PySpark, and SQL. Databricks certification (e.g., Certified Data Engineer Professional). Experience with machine learning and AI integration in Databricks. Strong understanding of cloud platforms (AWS, Azure, or GCP). Proven leadership experience in managing technical teams. Excellent problem-solving and communication skills.

Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance: integration and passion-sharing events.
Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture. Let's grow together.
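As referenced above, a minimal medallion-style (bronze/silver/gold) sketch with Delta tables, illustrating the lakehouse pattern named in the posting. Paths and column names are placeholder assumptions, not a real project layout.

```python
# Minimal sketch, assuming hypothetical mount paths and a bookings feed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw feed as-is, plus an ingestion timestamp.
bronze = (spark.read.json("/mnt/landing/bookings/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/lake/bronze/bookings")

# Silver: clean types, drop duplicates, keep only valid records.
silver = (spark.read.format("delta").load("/mnt/lake/bronze/bookings")
          .withColumn("booking_date", F.to_date("booking_date"))
          .dropDuplicates(["booking_id"])
          .filter(F.col("status").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/bookings")

# Gold: business-level aggregate ready for BI consumption.
gold = (silver.groupBy("booking_date", "origin_port")
        .agg(F.count("*").alias("bookings"), F.sum("teu").alias("total_teu")))
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_bookings")
```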
Posted 2 weeks ago