Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 years
4 - 8 Lacs
Gurgaon
On-site
Skills:
Primary Skills:
- Enhancements, new development, defect resolution, and production support of ETL pipelines built on AWS native services
- Integration of data sets using AWS services such as Glue and Lambda
- Use of AWS SNS to send emails and alerts
- Authoring ETL processes using Python and PySpark
- ETL process monitoring using CloudWatch events
- Connecting to data sources such as S3 and validating data using Athena
- Experience with CI/CD using GitHub Actions
- Proficiency in Agile methodology
- Extensive working experience with advanced SQL and a deep understanding of complex queries

Competencies / Experience:
- Deep technical skills in AWS Glue (Crawler, Data Catalog): 5 years
- Hands-on experience with Python and PySpark: 3 years
- PL/SQL experience: 3 years
- CloudFormation and Terraform: 2 years
- CI/CD with GitHub Actions: 1 year
- Experience with BI systems (Power BI, Tableau): 1 year
- Good understanding of AWS services such as S3, SNS, Secrets Manager, Athena, and Lambda: 2 years
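The "advanced SQL" and Athena-based validation this role calls for often comes down to window-function patterns such as keeping only the latest record per key from a change-history feed. A minimal sketch, using SQLite as a local stand-in for Athena (table and column names are hypothetical):

```python
import sqlite3

# In-memory SQLite database standing in for an Athena-queryable dataset.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, status TEXT, updated_at TEXT);
INSERT INTO orders VALUES
  (1, 'pending',   '2024-01-01'),
  (1, 'shipped',   '2024-01-03'),
  (2, 'pending',   '2024-01-02'),
  (2, 'cancelled', '2024-01-05'),
  (3, 'pending',   '2024-01-04');
""")

# ROW_NUMBER() picks the most recent row per order_id -- a common
# deduplication pattern when an ETL feed delivers full change history.
latest = conn.execute("""
  SELECT order_id, status FROM (
    SELECT order_id, status,
           ROW_NUMBER() OVER (PARTITION BY order_id
                              ORDER BY updated_at DESC) AS rn
    FROM orders
  ) WHERE rn = 1
  ORDER BY order_id
""").fetchall()
print(latest)  # [(1, 'shipped'), (2, 'cancelled'), (3, 'pending')]
```

The same `ROW_NUMBER() OVER (PARTITION BY ...)` query runs unchanged on Athena or Spark SQL.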
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Are you passionate about shaping the future of travel? Do you want to redefine how people plan, search, and book their journeys? At Expedia Group, we believe travel connects the world — and we're driven by the endless possibilities it creates. We’re looking for a skilled and driven Data Scientist III to support our content product teams with deep analytics and decision-making insights. This role collaborates cross-functionally with product, engineering, data, strategy, and other business units to optimize user engagement, conversion, and traffic, and to support the development of innovative content products.

In this role, you will:
- Deliver actionable insights into customer behavior and identify opportunities to improve our site and app experiences for travelers worldwide.
- Partner closely with product managers and engineers to design, implement, and analyze A/B tests and other experimentation frameworks.
- Act as a trusted analytics advisor to cross-functional teams, offering clear communication and updates on project progress and outcomes.
- Collaborate with data engineering to design and maintain scalable, efficient data pipelines and infrastructure.
- Develop and maintain robust dashboards and automated reporting tools to streamline recurring analysis.
- Frame complex business problems, extract and analyze data, and present key findings to leadership and stakeholders.
- Own and drive high-impact analytical projects from conception through execution.

Experience and qualifications:
- 5+ years of experience in quantitative analysis, with a passion for tackling complex business problems.
- Proficient in SQL and Excel, with hands-on experience using Python and/or PySpark for data transformation, analysis, and visualization.
- Familiar with machine learning techniques, including supervised and unsupervised models.
- Experienced in advanced analytics methods such as predictive modeling, hypothesis testing, A/B testing, and quasi-experimental design.
- Skilled in data visualization using tools like Tableau. Experience with Adobe Analytics or Google Analytics is a plus.
- Able to thrive in a fast-paced, dynamic environment, balancing multiple priorities with strong ownership and proactive problem-solving skills.

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners.
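The A/B-testing and hypothesis-testing work described above typically reduces to comparing two conversion rates. A minimal sketch of the textbook two-proportion z-test, using only the standard library (the traffic and conversion numbers are made up for illustration):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference of two conversion rates,
    the standard readout for a simple A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 400/5000, variant 460/5000.
z, p = two_proportion_ztest(400, 5000, 460, 5000)
print(z, p)  # z is about 2.14, p about 0.03: significant at the 5% level
```

In practice libraries such as statsmodels provide this test, but the arithmetic above is what they compute.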
CST: 2029030-50
Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive personal information to someone unless you are confident of who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, or age.
Posted 1 week ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
- Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: Spark, PySpark, Azure
Preferred Skill Sets: Spark, PySpark, Azure
Years of Experience Required: 4 - 8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: PySpark
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 12 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
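The dimensional-modeling knowledge this posting asks for centers on the star schema: a fact table of measures joined to dimension tables for slicing. A minimal sketch with hypothetical table and column names, using SQLite as a local stand-in for a warehouse engine:

```python
import sqlite3

# Tiny star schema: fact_sales references dim_product by surrogate key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO fact_sales  VALUES (1, 10.0), (1, 15.0), (2, 40.0);
""")

# The canonical "star join": facts aggregated by a dimension attribute.
totals = conn.execute("""
  SELECT d.category, SUM(f.amount)
  FROM fact_sales f JOIN dim_product d USING (product_id)
  GROUP BY d.category ORDER BY d.category
""").fetchall()
print(totals)  # [('books', 25.0), ('games', 40.0)]
```

The same query shape runs on Synapse, Databricks SQL, or any of the relational databases the posting lists.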
Posted 1 week ago
25.0 years
2 - 4 Lacs
Cochin
On-site
Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure, and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
Job Summary: We are seeking a highly experienced and visionary Databricks Data Architect with over 14 years in data engineering and architecture, including deep hands-on experience designing and scaling Lakehouse architectures using Databricks.
The ideal candidate will possess deep expertise across data modeling, data governance, real-time and batch processing, and cloud-native analytics on the Databricks platform. You will lead the strategy, design, and implementation of modern data architecture to drive enterprise-wide data initiatives and maximize the value of the Databricks platform.

Key Responsibilities:
- Lead the architecture, design, and implementation of scalable and secure Lakehouse solutions using Databricks and Delta Lake.
- Define and implement data modeling best practices, including the medallion architecture (bronze/silver/gold layers).
- Champion data quality and governance frameworks, leveraging Databricks Unity Catalog for metadata, lineage, access control, and auditing.
- Architect real-time and batch data ingestion pipelines using Apache Spark Structured Streaming, Auto Loader, and Delta Live Tables (DLT).
- Develop reusable templates, workflows, and libraries for data ingestion, transformation, and consumption across various domains.
- Collaborate with enterprise data governance and security teams to ensure compliance with regulatory and organizational data standards.
- Promote self-service analytics and data democratization by enabling business users through Databricks SQL and Power BI/Tableau integrations.
- Partner with Data Scientists and ML Engineers to enable ML workflows using MLflow, Feature Store, and Databricks Model Serving.
- Provide architectural leadership for enterprise data platforms, including performance optimization, cost governance, and CI/CD automation in Databricks.
- Define and drive the adoption of DevOps/MLOps best practices on Databricks using Databricks Repos, Git, Jobs, and Terraform.
- Mentor and lead engineering teams on modern data platform practices, Spark performance tuning, and efficient Delta Lake optimizations (Z-ordering, OPTIMIZE, VACUUM, etc.).

Technical Skills:
- 10+ years in Data Warehousing, Data Architecture, and Enterprise ETL design.
- 5+ years of hands-on experience with Databricks on Azure/AWS/GCP, including advanced Apache Spark and Delta Lake.
- Strong command of SQL, PySpark, and Spark SQL for large-scale data transformation.
- Proficiency with Databricks Unity Catalog, Delta Live Tables, Auto Loader, DBFS, Jobs, and Workflows.
- Hands-on experience with Databricks SQL and integration with BI tools (Power BI, Tableau, etc.).
- Experience implementing CI/CD on Databricks using tools like Git, Azure DevOps, Terraform, and Databricks Repos.
- Proficient with streaming architectures using Spark Structured Streaming and Kafka or Event Hubs/Kinesis.
- Understanding of ML lifecycle management with MLflow, and experience deploying MLOps solutions on Databricks.
- Familiarity with cloud object stores (e.g., AWS S3, Azure Data Lake Gen2) and data lake architectures.
- Exposure to data cataloging and metadata management using Unity Catalog or third-party tools.
- Knowledge of orchestration tools like Airflow, Databricks Workflows, or Azure Data Factory.
- Experience with Docker/Kubernetes for containerization (optional, for cross-platform knowledge).

Preferred Certifications (a plus):
- Databricks Certified Data Engineer Associate/Professional
- Databricks Certified Lakehouse Architect
- Microsoft Certified: Azure Data Engineer / Azure Solutions Architect
- AWS Certified Data Analytics – Specialty
- Google Professional Data Engineer

Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work.
We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
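The medallion (bronze/silver/gold) layering the role above is built around can be sketched without a cluster. In a real Lakehouse each layer is a Delta table and the transforms run in Spark; here plain Python lists of dicts stand in purely for illustration:

```python
# Bronze: raw ingested events, duplicates and bad rows included.
bronze = [
    {"id": 1, "amount": "100", "country": "IN"},
    {"id": 1, "amount": "100", "country": "IN"},   # duplicate event
    {"id": 2, "amount": None,  "country": "IN"},   # bad row, no amount
    {"id": 3, "amount": "250", "country": "US"},
]

# Silver: deduplicate on id, drop invalid rows, cast types.
seen, silver = set(), []
for row in bronze:
    if row["id"] in seen or row["amount"] is None:
        continue
    seen.add(row["id"])
    silver.append({**row, "amount": float(row["amount"])})

# Gold: business-level aggregate, ready for BI consumption.
gold = {}
for row in silver:
    gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]

print(silver)  # two clean, typed rows
print(gold)    # {'IN': 100.0, 'US': 250.0}
```

The point of the layering is that each table is reproducible from the one below it, so governance and auditing (via Unity Catalog in Databricks) can trace every gold number back to raw input.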
Posted 1 week ago
25.0 years
4 - 6 Lacs
Cochin
On-site
Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure, and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills.
As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Deliver data pipeline projects from development to deployment, managing timelines and risks.
- Ensure data quality and integrity through meticulous testing and monitoring.
- Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
- Work closely with the product team and key collaborators to understand data requirements.
- Adhere to data engineering industry standards and best practices.
- Experience developing in an Agile development environment, with comfort in Agile terminology and ceremonies.
- Familiarity with code versioning using Git and code migration tools.
- Familiarity with JIRA.
- Stay up to date with the latest data technologies and trends.

What we expect of you
Basic Qualifications:
- Doctorate degree OR Master’s degree and 4 to 6 years of Information Systems experience OR Bachelor’s degree and 6 to 8 years of Information Systems experience OR Diploma and 10 to 12 years of Information Systems experience.
- Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP).
- Proficiency in Python, PySpark, and SQL.
- Development knowledge of Databricks.
- Good analytical and problem-solving skills to address sophisticated data challenges.

Preferred Qualifications:
- Experience with data modeling.
- Experience working with ETL orchestration technologies.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Familiarity with SQL/NoSQL databases.

Soft Skills:
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Effective communication and interpersonal skills to collaborate with multi-functional teams.
- Excellent analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Ability to work successfully with global teams.
- High degree of initiative and self-motivation.
- Team-oriented, with a focus on achieving team goals.

Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
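The "data quality and integrity through meticulous testing and monitoring" duty this posting emphasizes usually starts as a set of completeness and validity rules run against every batch. A minimal sketch; the column names and rules here are hypothetical examples:

```python
def quality_report(rows, required=("id", "amount")):
    """Count rows failing simple completeness/validity rules,
    the kind of check a pipeline runs before promoting a batch."""
    failures = {"missing_field": 0, "negative_amount": 0}
    for row in rows:
        if any(row.get(col) is None for col in required):
            failures["missing_field"] += 1
        elif row["amount"] < 0:
            failures["negative_amount"] += 1
    return failures

rows = [
    {"id": 1, "amount": 20.0},
    {"id": 2, "amount": None},    # incomplete row
    {"id": 3, "amount": -5.0},    # invalid value
]
print(quality_report(rows))  # {'missing_field': 1, 'negative_amount': 1}
```

On Databricks the same idea is typically expressed as Delta Live Tables expectations or as assertions in a PySpark job, with failure counts emitted to monitoring.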
Posted 1 week ago
25.0 years
0 Lacs
Kochi, Kerala, India
On-site
Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure, and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills.
As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Deliver data pipeline projects from development to deployment, managing timelines and risks.
- Ensure data quality and integrity through meticulous testing and monitoring.
- Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
- Work closely with the product team and key collaborators to understand data requirements.
- Adhere to data engineering industry standards and best practices.
- Experience developing in an Agile development environment, with comfort in Agile terminology and ceremonies.
- Familiarity with code versioning using Git and code migration tools.
- Familiarity with JIRA.
- Stay up to date with the latest data technologies and trends.

What we expect of you
Basic Qualifications:
- Doctorate degree OR Master’s degree and 4 to 6 years of Information Systems experience OR Bachelor’s degree and 6 to 8 years of Information Systems experience OR Diploma and 10 to 12 years of Information Systems experience.
- Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP).
- Proficiency in Python, PySpark, and SQL.
- Development knowledge of Databricks.
- Good analytical and problem-solving skills to address sophisticated data challenges.

Preferred Qualifications:
- Experience with data modeling.
- Experience working with ETL orchestration technologies.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Familiarity with SQL/NoSQL databases.

Soft Skills:
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Effective communication and interpersonal skills to collaborate with multi-functional teams.
- Excellent analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Ability to work successfully with global teams.
- High degree of initiative and self-motivation.
- Team-oriented, with a focus on achieving team goals.

Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
Posted 1 week ago
5.0 years
4 - 9 Lacs
Bengaluru
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client's end, ensuring they meet 100% quality assurance parameters.

Responsibilities:
- Design and implement data modeling, data ingestion, and data processing for various datasets
- Design, develop, and maintain an ETL framework for new data sources
- Develop data ingestion using AWS Glue/EMR and data pipelines using PySpark, Python, and Databricks
- Build orchestration workflows using Airflow and Databricks job workflows
- Develop and execute ad hoc data ingestion to support business analytics
- Proactively interact with vendors on any questions and report the status accordingly
- Explore and evaluate tools/services to support business requirements
- Ability to help create a data-driven culture and impactful data strategies
- Aptitude for learning new technologies and solving complex problems

Qualifications:
- Minimum of a bachelor’s degree, preferably in Computer Science, Information Systems, or Information Technology
- Minimum 5 years of experience on cloud platforms such as AWS, Azure, GCP
- Minimum 5 years of experience in Amazon Web Services such as VPC, S3, EC2, Redshift, RDS, EMR, Athena, IAM, Glue, DMS, Data Pipeline & API, Lambda, etc.
- Minimum of 5 years of experience in ETL and data engineering using Python, AWS Glue, AWS EMR/PySpark, and Airflow for orchestration
- Minimum 2 years of experience in Databricks, including Unity Catalog, data engineering, job workflow orchestration, and dashboard generation based on business requirements
- Minimum 5 years of experience in SQL, Python, and source control such as Bitbucket, with CI/CD for code deployment
- Experience in PostgreSQL, SQL Server, MySQL, and Oracle databases
- Experience in MPP systems such as AWS Redshift, AWS EMR, and Databricks SQL warehouses and compute clusters
- Experience in distributed programming with Python, Unix scripting, MPP, and RDBMS databases for data integration
- Experience building distributed high-performance systems using Spark/PySpark and AWS Glue, and developing applications for loading/streaming data into Databricks SQL warehouse and Redshift
- Experience in Agile methodology
- Proven ability to write technical specifications for data extraction and good-quality code
- Experience with big data processing techniques using Sqoop, Spark, and Hive is an additional plus
- Experience in data visualization tools, including Power BI and Tableau
- Nice to have: experience building UIs using the Python Flask framework and Angular

Mandatory Skills: Python for Insights
Experience: 5-8 Years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 week ago
25.0 years
0 Lacs
Kochi, Kerala, India
On-site
Company Overview Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have had a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed. Job Overview Job Summary: We are seeking a highly experienced and visionary Databricks Data Architect with over 14 years in data engineering and architecture, including deep hands-on experience in designing and scaling Lakehouse architectures using Databricks.
The ideal candidate will possess deep expertise across data modeling, data governance, real-time and batch processing, and cloud-native analytics using the Databricks platform. You will lead the strategy, design, and implementation of modern data architecture to drive enterprise-wide data initiatives and maximize the value of the Databricks platform.

Key Responsibilities
- Lead the architecture, design, and implementation of scalable and secure Lakehouse solutions using Databricks and Delta Lake.
- Define and implement data modeling best practices, including medallion architecture (bronze/silver/gold layers).
- Champion data quality and governance frameworks leveraging Databricks Unity Catalog for metadata, lineage, access control, and auditing.
- Architect real-time and batch data ingestion pipelines using Apache Spark Structured Streaming, Auto Loader, and Delta Live Tables (DLT).
- Develop reusable templates, workflows, and libraries for data ingestion, transformation, and consumption across various domains.
- Collaborate with enterprise data governance and security teams to ensure compliance with regulatory and organizational data standards.
- Promote self-service analytics and data democratization by enabling business users through Databricks SQL and Power BI/Tableau integrations.
- Partner with Data Scientists and ML Engineers to enable ML workflows using MLflow, Feature Store, and Databricks Model Serving.
- Provide architectural leadership for enterprise data platforms, including performance optimization, cost governance, and CI/CD automation in Databricks.
- Define and drive the adoption of DevOps/MLOps best practices on Databricks using Databricks Repos, Git, Jobs, and Terraform.
- Mentor and lead engineering teams on modern data platform practices, Spark performance tuning, and efficient Delta Lake optimizations (Z-ordering, OPTIMIZE, VACUUM, etc.).

Technical Skills
- 10+ years in Data Warehousing, Data Architecture, and Enterprise ETL design.
- 5+ years of hands-on experience with Databricks on Azure/AWS/GCP, including advanced Apache Spark and Delta Lake.
- Strong command of SQL, PySpark, and Spark SQL for large-scale data transformation.
- Proficiency with Databricks Unity Catalog, Delta Live Tables, Auto Loader, DBFS, Jobs, and Workflows.
- Hands-on experience with Databricks SQL and integration with BI tools (Power BI, Tableau, etc.).
- Experience implementing CI/CD on Databricks, using tools like Git, Azure DevOps, Terraform, and Databricks Repos.
- Proficiency with streaming architecture using Spark Structured Streaming, Kafka, or Event Hubs/Kinesis.
- Understanding of ML lifecycle management with MLflow, and experience deploying MLOps solutions on Databricks.
- Familiarity with cloud object stores (e.g., AWS S3, Azure Data Lake Gen2) and data lake architectures.
- Exposure to data cataloging and metadata management using Unity Catalog or third-party tools.
- Knowledge of orchestration tools like Airflow, Databricks Workflows, or Azure Data Factory.
- Experience with Docker/Kubernetes for containerization (optional, for cross-platform knowledge).

Preferred Certifications (a Plus)
- Databricks Certified Data Engineer Associate/Professional
- Databricks Certified Lakehouse Architect
- Microsoft Certified: Azure Data Engineer / Azure Solutions Architect
- AWS Certified Data Analytics – Specialty
- Google Professional Data Engineer

Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work.
We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
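The medallion (bronze/silver/gold) architecture named in the posting above refines data in layers: raw records as ingested, then cleaned and deduplicated records, then business-level aggregates. A toy, platform-independent sketch (record shapes and rules are hypothetical; a real implementation would use Delta Lake tables rather than Python lists):

```python
# Bronze: raw records exactly as ingested, including duplicates and bad values.
bronze = [
    {"user": "a", "clicks": "3"},
    {"user": "a", "clicks": "3"},            # duplicate row
    {"user": "b", "clicks": "not-a-number"},  # fails the quality rule
    {"user": "c", "clicks": "7"},
]

def to_silver(records):
    """Silver: deduplicate and enforce type/quality rules."""
    seen, silver = set(), []
    for r in records:
        key = (r["user"], r["clicks"])
        if key in seen or not r["clicks"].isdigit():
            continue
        seen.add(key)
        silver.append({"user": r["user"], "clicks": int(r["clicks"])})
    return silver

def to_gold(records):
    """Gold: a business-level aggregate ready for BI consumption."""
    return {"total_clicks": sum(r["clicks"] for r in records),
            "users": sorted(r["user"] for r in records)}

gold = to_gold(to_silver(bronze))
```

The layering keeps raw data replayable (bronze is never mutated) while downstream consumers only ever see validated, aggregated tables.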
Posted 1 week ago
0 years
2 - 3 Lacs
Bengaluru
Remote
Bangalore, India Hyderabad, India Chennai, India Job ID: R-1075847 Apply prior to the end date: August 8th, 2025 When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you’ll be doing... As a DMTS (AI Science) you will own and drive end-to-end solutions for cognitive and Gen AI-driven use cases. Designing and building scalable cognitive and generative AI solutions to meet the needs of a given business engagement. Providing technical thought leadership on model architecture, delivery, monitoring, measurement and model lifecycle best practices. Working in a collaborative environment with global teams to drive solutioning of business problems. Developing end-to-end analytical solutions and articulating insights to leadership. Providing data-driven recommendations to the business by clearly articulating complex modeling concepts through the generation and delivery of presentations. Analyzing and modeling both structured and unstructured data from a number of distributed client and publicly available sources. Assisting with the mentorship and development of junior members, and driving the team towards solutions. Assisting in growing the data science practice in Verizon, by meeting business goals through client prospecting, responding to model POCs, identifying and closing opportunities within identified Insights, writing white papers, exploring new tools and defining best practices. What we’re looking for...
You have strong ML/NLP/GenAI skills and are eager to work in a collaborative environment with global teams to drive NLP/GenAI applications to business problems. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various stakeholders and cross-functional teams to implement data science driven business solutions. You take pride in your role as a data scientist and evangelist and enjoy adding to the systems, concepts and models that enrich the practice. You enjoy mentoring and empowering the team to expand their technical capabilities. You’ll need to have: Bachelor’s degree or four or more years of work experience. Six or more years of work experience. Data scientist and thought leader with experience implementing production use cases in Gen AI and cognitive. Ten or more years of hands-on experience implementing large-scale NLP projects and fine-tuning and evaluating LLMs for downstream tasks such as text generation, classification, summarization, question answering, entity extraction, etc. Working knowledge of agentic AI frameworks like LangChain, LangGraph, CrewAI, etc. Ability to guide the team to correctly analyze cognitive insights and leverage unstructured conversational data to create transformative, intelligent, context-aware and adaptive AI systems. Experience in machine learning and deep learning model development and deployment from scratch in Python. Working knowledge of NLP frameworks and libraries like NLTK, spaCy, Transformers, PyTorch, TensorFlow, and Hugging Face APIs. Working knowledge of various supervised and unsupervised ML algorithms. Should know the various data preprocessing techniques and their impact on an algorithm's accuracy, precision, and recall.
Knowledge and implementation experience of deep learning, i.e., Convolutional Neural Nets (CNN), Recurrent Neural Nets (RNN), Long Short-Term Memory (LSTM), Generative Adversarial Networks (GAN), and Deep Reinforcement Learning. Experience with RESTful, JSON API services. Working knowledge of word embeddings, TF-IDF, tokenization, n-grams, stemmers, lemmatization, part-of-speech tagging, entity resolution, ontology, lexicology, phonetics, intents, entities, and context. Experience in analyzing live chat/call conversations with agents. Expertise in Python, SQL, PySpark, Scala and/or other languages and tools. Understanding of validation frameworks for generative model output, and perspective on future-ready systems to scale validation. Familiarity with GPU/CPU architecture and distributed computing, and the general infrastructure needed to scale Gen AI models. Ability to provide technical thought leadership on model architecture, delivery, monitoring, measurement and model lifecycle best practices. Even better if you have one or more of the following: PhD or an advanced degree or specialization in Artificial Intelligence. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours: 40. Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
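Several of the NLP fundamentals listed above, such as tokenization and TF-IDF, can be illustrated in a few lines of dependency-free Python (a toy sketch, not the production frameworks named in the posting):

```python
import math
from collections import Counter

docs = ["the cat sat", "the dog sat", "the cat ran"]
tokenized = [d.split() for d in docs]  # naive whitespace tokenization

def tf_idf(term: str, doc_tokens: list[str]) -> float:
    """Term frequency times (smoothed) inverse document frequency."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    df = sum(term in toks for toks in tokenized)  # documents containing term
    idf = math.log(len(tokenized) / (1 + df)) + 1
    return tf * idf

# "cat" is distinctive; the stop-word-like "the" appears everywhere,
# so its IDF (and hence its score) is lower.
score_cat = tf_idf("cat", tokenized[0])
score_the = tf_idf("the", tokenized[0])
```

Libraries like scikit-learn, NLTK, or spaCy provide hardened versions of these primitives, but the weighting intuition is the same.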
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Looking for an experienced Dataiku Developer/Engineer with 5-8 years of experience who can translate ideas and requirements into full-fledged and scalable Dataiku solutions. Candidates must have exposure to Dataiku 12 and above, along with a clear understanding of building custom code and visual recipes. The consultant hired will sit with the business, understand the requirement at hand, and formulate the desired solution. Must Have: Experience working with Dataiku flows, recipes, jobs, and webapps. Experience with Python and PySpark coding. Experience deploying solutions on the Dataiku Automation node. Experience building or managing CI/CD pipelines using Jenkins.
Posted 1 week ago
5.0 years
0 Lacs
Karnataka
On-site
WHO YOU’LL WORK WITH
This role is part of Nike’s Content Technology team within the Consumer Product and Innovation (CP&I) organization, working very closely with the globally distributed Engineering and Product teams. This role will roll up to the Director of Software Engineering based out of Nike India Tech Centre.

WHO WE ARE LOOKING FOR
We are looking for an experienced, technology-focused, and hands-on Lead Engineer to join our team in Bengaluru, India. As a Senior Data Engineer, you will play a key role in ensuring that our data products are robust and capable of supporting our Data Engineering and Business Intelligence initiatives.
- A data engineer with 5+ years of experience working with cloud-native platforms.
- Advanced skills in SQL, PySpark, Apache Airflow (or similar workflow management tools), Databricks, and Snowflake.
- Deep understanding of Spark optimization, Delta Lake, and Medallion architecture.
- Strong experience in data modeling and data quality practices.
- Experience with Tableau for data validation and monitoring.
- Exposure to DevOps practices, CI/CD, Git, and security aspects.
- Effective mentorship and team collaboration skills.
- Strong communication skills, able to explain technical concepts clearly.
- Experience with Kafka or other real-time systems.
Preferred:
- Familiarity with ML/GenAI integration into pipelines.
- Databricks Data Engineer certification.

WHAT YOU’LL WORK ON
- Own and optimize large-scale ETL/ELT pipelines and reusable frameworks.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Guide junior engineers through code reviews and design discussions.
- Monitor data quality, availability, and system performance.
- Lead CI/CD implementation and improve workflow automation.
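Workflow tools like Airflow, mentioned above, run tasks in dependency order over a directed acyclic graph. The core idea is a topological sort, sketched here with Python's standard library (the task names are hypothetical; this shows the concept, not the Airflow API):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract -> transform -> {quality_check, load} -> publish.
# Each key maps a task to the set of tasks it depends on.
dag = {
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform"},
    "publish": {"quality_check", "load"},
}

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and parallel execution of independent tasks (here, `quality_check` and `load` could run concurrently), but the dependency resolution underneath is this same ordering.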
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. American Express has embarked on an exciting transformation driven by an energetic new team drawn from an inclusive pool of candidates, giving all an equal opportunity for growth. Service Operations is responsible for providing reliable platforms for hundreds of critical applications and utilities within American Express. Its primary focus is to provide technical expertise and tooling to ensure the highest level of reliability and availability for critical applications, and to provide consultation and strategic recommendations by quickly assessing and remediating complex availability issues. The team is responsible for driving automation and efficiencies to increase quality, availability, and auto-healing of complex processes. Responsibilities include, but are not limited to: The ideal candidate will be responsible for designing, developing, and maintaining data pipelines.
Serving as a core member of an agile team that drives user story analysis and elaboration, and designs and develops responsive web applications using the best engineering practices. You will work closely with data scientists, analysts and other partners to ensure the flawless flow of data. You will build and optimize reports for analytical and business purposes. Monitor and resolve data pipeline issues to ensure smooth operation. Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of data. Implement data governance policies, access controls, and security measures to protect critical data and ensure compliance. Develop a deep understanding of integrations with other systems and platforms within the supported domains. Bring a culture of innovation, ideas, and continuous improvement. Challenge the status quo, demonstrate risk taking, and implement creative ideas. Manage your own time, and work well both independently and as part of a team. Adopt emerging standards while promoting best practices and consistent framework usage. Work with Product Owners to define requirements for new features and plan increments of work.

Minimum Qualifications
- BS or MS degree in computer science, computer engineering, or another technical subject area, or equivalent.
- 0 to 3 years of work experience.
- At least 1 to 3 years of hands-on experience with SQL, including schema design, query optimization and performance tuning.
- Experience with distributed computing frameworks like Hadoop, Hive, and Spark for processing large-scale data sets.
- Proficiency in a programming language such as Python or PySpark for building data pipelines and automation scripts.
- Understanding of cloud computing and exposure to BigQuery and Airflow to execute DAGs.
- Knowledge of CI/CD, Git commands, and deployment processes.
Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and optimize data processing workflows. Excellent communication and collaboration skills. We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
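The SQL skills called out above (schema design, query optimization, performance tuning) often reduce to indexing the columns a query filters on. A self-contained sqlite3 illustration (the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("acme", 10.0), ("acme", 5.0), ("globex", 7.5)],
)

# Without an index, the WHERE filter forces a full table scan;
# with one, the engine can seek directly to the matching rows.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE customer = 'acme'"
).fetchall()
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = 'acme'"
).fetchone()[0]
```

On large tables the difference between a scan and an index seek dominates query latency; `EXPLAIN QUERY PLAN` (or its equivalent in other engines) is the standard first step in tuning.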
Posted 1 week ago
2.0 years
0 Lacs
Karnataka
On-site
WHO YOU’LL WORK WITH
This role is part of Nike’s Content Technology team within the Consumer Product and Innovation (CP&I) organization, working very closely with the globally distributed Engineering and Product teams. This role will roll up to the Director of Software Engineering based out of Nike India Tech Centre.

WHO WE ARE LOOKING FOR
We are looking for an experienced, technology-focused, and hands-on Lead Engineer to join our team in Bengaluru, India. As a Senior Data Engineer, you will play a key role in ensuring that our data products are robust and capable of supporting our Data Engineering and Business Intelligence initiatives.
- A data engineer with 2+ years of experience in data engineering.
- Proficient in SQL, Python, PySpark, and Apache Airflow (or similar workflow management tools).
- Hands-on experience with Databricks, Snowflake, and cloud platforms (AWS/GCP/Azure).
- Good understanding of Spark, Delta Lake, Medallion architecture, and ETL/ELT processes.
- Solid data modeling and data profiling skills.
- Familiarity with Agile methodologies (Scrum/Kanban).
- Awareness of DevOps practices in data engineering (automated testing, security administration, workflow orchestration).
- Exposure to Kafka or real-time data processing.
- Strong communication and collaboration skills.
Preferred:
- Familiarity with Tableau or similar BI tools.
- Exposure to GenAI/ML pipelines.
Nice to have:
- Databricks certifications for data engineer, developer, or Apache Spark.

WHAT YOU’LL WORK ON
- Build and maintain ETL/ELT pipelines and reusable data components.
- Collaborate with peers and stakeholders to gather data requirements.
- Participate in code reviews and contribute to quality improvements.
- Monitor and troubleshoot data pipelines for performance and reliability.
- Support CI/CD practices in data engineering workflows.
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Senior Programmer Analyst – B1
Employment Type: Permanent
Location: Chennai
Basic Functions
4-6 years of experience in enterprise application development and support using Microsoft technologies such as .NET, SQL, C#, MVC, JavaScript, jQuery, and ReactJS. 2+ years of experience in Azure cloud services such as Synapse, Databricks, Data Factory, Azure App Service, and Kubernetes. Experience in data modeling & data integration, reporting, and data governance & security. Source code available on Git, coding champion, and so on. Produce scalable, flexible, high-quality code that satisfies both functional and non-functional requirements. Develop, deploy, test and maintain technical assets in a highly secure and integrated enterprise computing environment, and support functional testing and UI/UX testing. Responsible for participating in architecture, data modeling, and overall design sessions. Coordinate with development and business teams to ensure the smooth execution of the project. Collaborate/communicate with the on-site project team and business users as required. Cross-train and mentor team members to encourage knowledge sharing.
Essential Functions
Strong problem-solving and analytical skills and the ability to “roll up your sleeves” and work to create timely solutions and resolutions; to validate, verify, communicate, and resolve application issues. Ability to work on multiple product features simultaneously. Quick learner with the ability to understand the product’s functionality end to end. Opportunity to try out bleeding-edge technologies to provide POCs, which will be evaluated and put to use if approved. Strong knowledge of algorithms, design patterns, and fundamental computer science concepts and data structures. Experience working in Agile (SCRUM) environments and familiarity with iterative development cycles. Experience implementing authentication and authorization with OAuth, and use of Single Sign-On and SAML-based authentication.
Primary Internal Interactions
Review with the overall Product Manager & AVP for improvements in the product development lifecycle. Assessment meetings with VP & above for additional product development features. Train and mentor the juniors in the team.
Primary External Interactions
Communicate with onshore stakeholders & executive team members. Help the Product Management Group set the product roadmap & help in identifying future sellable product features. Client interactions to better understand expectations & streamline solutions. If required, should be a bridge between the client and the technology teams.
Skills
Technical Skills Required Skills
Full-stack developer experienced in ASP.NET, C#, MVC, JavaScript, jQuery, React & SQL Server. Azure Cloud – Synapse, Databricks, Data Factory, Azure App Service, Kubernetes. Experience in migrating on-prem applications to Azure Cloud.
Skills Nice To Have
Experience with big data tools, not limited to Python, PySpark, HIVE. Expertise in US Healthcare Insurance. Stack Overflow account score. Technical blogs & technical write-ups. Part of any open source contributions. Certifications in Agile & Waterfall methodologies.
Process Specific Skills
Delivery Domain – Product Roadmap Development. Business Domain - US Healthcare Insurance & Preventive Analytics, Care Optimization, Population Management.
Soft Skills
Understanding of the Healthcare business vertical and the business terms within. Good analytical skills. Strong communication skills - oral and written. Ability to work with various stakeholders across various geographies. Excellent team player as well as an individual contributor if required.
Working Hours
General Shift – 11 AM to 8 PM. Will be required to extend as per project release needs.
Education Requirements
Master’s or Bachelor’s degree from top-tier colleges with good grades, preferably from an Engineering background
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Position Title, Responsibility Level - Senior Programmer Analyst, B1
Location - Chennai
Employment Type - Permanent
Basic Functions
4+ years of experience in enterprise application design, development & support. Extensive cloud development and deployment experience. Versatile technology experience, mandatorily Microsoft and the open-source JS stack (.NET 8.0+, C#, ASP.NET, MVC, Web API, NodeJS, React, TypeScript and SQL Server). Source code knowledge of Git, coding champion and so on. People management responsibilities: should have handled teams of 2-5 people. Responsible for leading detailed design, end-to-end development (front-end and back-end), unit testing and integration of applications, and designing client-side and server-side architecture. Produce scalable and flexible, high-quality code that satisfies both functional and non-functional requirements. Develop, deploy, test and maintain technical assets in a highly secure and integrated enterprise computing environment, and support functional testing and UI/UX testing. Cross-train and mentor team members for complete knowledge of technologies. Analyze and translate business requirements to technical design with security and data protection settings. Build features and applications with a mobile-responsive design. Collaborate/communicate with the on-site project team and business users as required. Would need to lead teams on web application development interfacing with big data and analytics platforms. Comprehend the fundamental solution being developed/deployed – its business value, blueprint, how it fits with the overall architecture, risks, and more. Would be required to provide inputs on solution and design effort to build scalable features/functionality.
Essential Functions
Understanding of the US Healthcare value chain and key impact drivers [Payer and/or Provider]. Ability to work on multiple product features simultaneously.
Quick learner with the ability to understand the product’s functionality end to end. Opportunity to try out bleeding-edge technologies to provide POCs, which will be evaluated and put to use if approved. Strong knowledge of algorithms, design patterns and fundamental computer science concepts. Experience working in Agile (SCRUM) environments and familiarity with iterative development cycles. Experience implementing authentication and authorization with OAuth, and use of Single Sign-On and SAML-based authentication. Familiarity with common stacks.
Primary Internal Interactions
Review with the technical owners for improvements in product quality and meeting delivery dates. Train and mentor the juniors in the team.
Primary External Interactions
Communicate with onshore stakeholders from IT, DevOps and Support team members as appropriate during development and when handling post-production issues.
Skills
Technical Skills Required Skills - Full-stack technologist with extensive knowledge of .NET 8.0+, C#, ASP.NET, MVC, Web API, SQL, SQL Server, NodeJS, React, TypeScript, HTML, CSS, JavaScript, XML, JSON, jQuery, and design patterns.
Skills Nice To Have
Experience with big data tools, not limited to Python, PySpark, HIVE. Expertise in US Healthcare Insurance. Experience in mobile application development. Stack Overflow account score. Technical blogs & technical write-ups. Part of any open-source contributions. Experience in Cloud & NLP technologies. Certifications in Agile & Waterfall methodologies.
Process Specific Skills
Business Domain - US Healthcare Insurance & Payer Analytics, Care Coordination & Care Optimization, Population Health Analytics & Risk Management, Member 360 View & Analytics, Gaps & Compliance Measures, Payer Management & Code Classification Management, Utilization & Cost Management.
Soft Skills
Understanding of the Healthcare business vertical and the business terms within. Good analytical skills.
Strong communication skills - oral and written. Ability to work with various stakeholders across various geographies. Excellent team player, with the ability to build and sustain teams. Should be able to function as an individual contributor as well, if required. Mentor people and create a high-performing organization (foster relationships, resolve conflicts, and so on, while delivering performance feedback). Working Hours: General Shift - 11 AM to 8 PM; will be required to extend as per project release needs. Education Requirements: Master’s / Bachelor’s degree from top-tier colleges with good grades, preferably from an engineering background.
Posted 1 week ago
8.0 years
7 - 10 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are looking for a highly skilled and experienced Lead Data Analyst to lead data initiatives and deliver actionable insights that drive strategic decisions. The ideal candidate will have deep expertise in data analytics, cloud data platforms, and modern data engineering tools including Databricks, Azure Data Factory, and PySpark. This role requires solid leadership, technical proficiency, and excellent communication skills to collaborate across teams and influence business outcomes. 
Primary Responsibilities: Lead the design and execution of complex data analysis projects to support business strategy and operations Build and optimize data pipelines using Azure Data Factory and Databricks Perform advanced data analysis and modeling using PySpark, SQL, and Python Develop and maintain dashboards and reports using tools like Power BI, Tableau, or Looker Collaborate with data engineers, product managers, and business stakeholders to define data requirements and deliver insights Ensure data quality, governance, and compliance across all analytics initiatives Mentor junior analysts and foster a data-driven culture within the organization Present findings and recommendations to senior leadership in a clear and compelling manner Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: 8+ years of experience in data analytics, business intelligence, or a related field Hands-on experience with Databricks and Azure Data Factory Experience with data visualization tools such as Power BI, Tableau, or Looker Proficiency in SQL, Python, and PySpark for data manipulation and analysis Solid understanding of data warehousing, ETL processes, and cloud data platforms (Azure, AWS, or GCP) Proven excellent analytical thinking, problem-solving, and communication skills Proven ability to lead projects and influence stakeholders through data storytelling Preferred Qualifications: Certifications in Azure or other cloud platforms Experience with big data technologies (e.g., Spark, Hadoop) Knowledge of machine learning concepts and tools Familiarity with Agile methodologies and project management tools At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
Software Engineering Lead Analyst - HIH - Evernorth. About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Software Engineering Lead Analyst Position Overview: The Full-stack Data Engineer is responsible for delivering a business need end to end, from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and to have a hunger to learn on the job and add value to the business. Critical attributes of a full-stack engineer, among others, are Ownership & Accountability. In addition to delivery, the full-stack engineer should have an automation-first and continuous-improvement mindset. He/She should drive the adoption of CI/CD tools and support the improvement of the tool sets/processes. Full-stack engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks, embrace interacting with all levels of the team, and raise challenges when necessary. We aim to be cutting-edge engineers – not institutionalized developers. 
Roles & Responsibilities: Minimize "meetings" to get requirements and have direct business interactions. Write referenceable and modular code. Design and architect the solution independently. Be fluent in particular areas and proficient in many areas. Have a passion to learn. Take ownership and accountability. Understand when to automate and when not to. Have a desire to simplify. Be entrepreneurial / business-minded. Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have a business impact. Take risks and champion new ideas. Qualifications - Primary Skills: Hands-on Python / PySpark programming - 3+ years. SQL experience - 2+ years (NoSQL can also work, but should have at least 1 year of SQL). Experience working with cloud tech - 1+ year, any provider (AWS preferred). DevOps practices. Experience Desired: Experience with Git/SVN. Experience with scripting (JavaScript, Python, R, Ruby, Perl, etc.). Experience being part of Agile teams - Scrum or Kanban. Airflow. Databricks / cloud certifications. Additional Skills: Excellent troubleshooting skills. Strong communication skills. Fluent in BDD and TDD development methodologies. Work in an agile CI/CD environment (Jenkins experience a plus). Knowledge and/or experience with healthcare information domains is a plus. Location & Hours of Work: Hyderabad / General Shift (11:30 AM - 8:30 PM IST / 1:00 AM - 10:00 AM EST / 2:00 AM - 11:00 AM EDT). Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations. About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. 
We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
Software Engineering Advisor - HIH - Evernorth. About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Position Overview: The Full-stack Data Engineer is responsible for delivering a business need end to end, from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and to have a hunger to learn on the job and add value to the business. Critical attributes of a full-stack engineer, among others, are Ownership & Accountability. In addition to delivery, the full-stack engineer should have an automation-first and continuous-improvement mindset. He/She should drive the adoption of CI/CD tools and support the improvement of the tool sets/processes. Full-stack engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks, embrace interacting with all levels of the team, and raise challenges when necessary. We aim to be cutting-edge engineers – not institutionalized developers. 
Roles & Responsibilities: Minimize "meetings" to get requirements and have direct business interactions. Write referenceable and modular code. Design and architect the solution independently. Be fluent in particular areas and proficient in many areas. Have a passion to learn. Take ownership and accountability. Understand when to automate and when not to. Have a desire to simplify. Be entrepreneurial / business-minded. Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have a business impact. Take risks and champion new ideas. Qualifications - Primary Skills: Hands-on Python / PySpark programming - 8+ years. SQL experience - 8+ years (NoSQL can also work, but should have at least 3 years of SQL). Big data technologies such as Databricks or Snowflake - 1+ year (strong on theory). Experience working with cloud tech - 3+ years, any provider (AWS preferred). DevOps practices - 2+ years. Experience Desired: Experience with Git/SVN. Experience with scripting (JavaScript, Python, R, Ruby, Perl, etc.). Experience being part of Agile teams - Scrum or Kanban. 
Airflow Databricks / Cloud Certifications Additional Skills: Excellent troubleshooting skills Strong communication skills Fluent in BDD and TDD development methodologies Work in an agile CI/CD environment (Jenkins experience a plus) Knowledge and/or experience with Health care information domains is a plus Location & Hours of Work Hyderabad /General Shift (11:30 AM - 8:30 PM IST / 1:00 AM - 10:00 AM EST / 2:00 AM - 11:00 AM EDT) Equal Opportunity Statement Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Experience in the design and implementation of big data systems using PySpark, database migrations, transformation, and integration solutions for any data engineering project. • Must have excellent knowledge of Apache Spark and the Python programming language. • Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, data enrichment, and loading into target destinations. • Should have experience integrating PySpark with downstream and upstream applications through real-time/batch processing interfaces. • Should have experience in fine-tuning processes and troubleshooting performance issues. • Experience in code deployment and scheduling tools like Airflow and Control-M.
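The processing tasks named above (reading from external sources, merging, enriching, loading into targets) follow a recognizable shape. As a minimal illustrative sketch only, written in plain Python rather than the PySpark DataFrame API, and with hypothetical `orders`/`customers` data, the merge-and-enrich step might look like:

```python
# Illustrative stand-in for the PySpark read/merge/enrich/load pattern.
# All names (orders, customers, amount_eur) are hypothetical examples.

def merge(orders, customers):
    """Inner-join order rows to customer rows on customer_id."""
    by_id = {c["customer_id"]: c for c in customers}
    return [
        {**o, **by_id[o["customer_id"]]}
        for o in orders
        if o["customer_id"] in by_id
    ]

def enrich(rows):
    """Add a derived column, e.g. the amount converted to a second currency."""
    return [{**r, "amount_eur": round(r["amount_usd"] * 0.9, 2)} for r in rows]

orders = [{"order_id": 1, "customer_id": "c1", "amount_usd": 100.0}]
customers = [{"customer_id": "c1", "segment": "retail"}]

loaded = enrich(merge(orders, customers))
print(loaded[0]["segment"], loaded[0]["amount_eur"])  # retail 90.0
```

In PySpark the same steps would be `spark.read` on the source, `DataFrame.join` for the merge, and `withColumn` for the enrichment, before writing to the target.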
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Job Description Role and Responsibilities: Emphasis is on end-to-end delivery of analysis. Extremely comfortable working with data, including managing a large number of data sources, analyzing data quality, and proactively working with the client’s data/IT teams to resolve issues. Hands-on experience with machine learning algorithms such as Logistic Regression, Random Forest, and XGBoost. Use a variety of analytical tools (Python, SQL, PySpark, etc.) to carry out analysis and drive conclusions. Reformulate highly technical information into concise, understandable terms for presentations. Candidate Profile: Extensive experience with debt recovery and collections models is required. Required skills: Python, SQL, Hive, PySpark, Hadoop, Machine Learning, Credit Risk Modeling, Debt Recovery and Collection Model. 5-12 years of consulting and analytics delivery experience. Experience in the Banking and Financial Services domain. Master’s or Bachelor’s degree in math, statistics, economics, computer engineering, or a related analytics field. Very strong analytical skills with the demonstrated ability to research and make decisions based on day-to-day and complex customer problems. Strong record of achievement, solid analytical ability, and an entrepreneurial, hands-on approach to work. Outstanding written and verbal communication skills. Job Location: 2 days work from office, 3 days work from home.
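For context on how a fitted model such as logistic regression turns account features into a recovery score, here is a minimal sketch; the feature names and weights below are hypothetical placeholders, not fitted values from any real collections model:

```python
import math

# Hypothetical scoring step of a logistic-regression recovery model.
# WEIGHTS and BIAS are placeholder values, not fitted coefficients.
WEIGHTS = {"days_past_due": 0.01, "balance": -0.0001}
BIAS = 0.5

def recovery_probability(account):
    """Linear combination of features pushed through the sigmoid."""
    z = BIAS + sum(WEIGHTS[k] * account[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

p = recovery_probability({"days_past_due": 30, "balance": 2000.0})
print(round(p, 3))  # 0.646
```

In practice the weights come from training (e.g. scikit-learn or Spark MLlib), but the scoring arithmetic is exactly this shape.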
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary: The Senior Data Engineer leads complex data engineering projects, working on designing data architectures that align with business requirements. This role focuses on optimizing data workflows, managing data pipelines, and ensuring the smooth operation of data systems. Minimum Qualifications: 8 years overall IT experience, with a minimum of 5 years of work experience in the tech skills below. Tech Skills: Strong experience in Python scripting and PySpark for data processing. Proficiency in SQL, dealing with big data over Informatica ETL. Proven experience in data quality and data optimization of a data lake in Iceberg format, with a strong understanding of the architecture. Experience in AWS Glue jobs. Experience in the AWS cloud platform and its data services: S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, EventBridge. Expertise in Bash shell scripting. Strong understanding of healthcare data systems and experience leading data engineering teams. Experience in Agile environments. Excellent problem-solving skills and attention to detail. Effective communication and collaboration skills. Responsibilities: Leads development of data pipelines and architectures that handle large-scale data sets. Designs, constructs, and tests data architecture aligned with business requirements. Provides technical leadership for data projects, ensuring best practices and high-quality data solutions. Collaborates with product, finance, and other business units to ensure data pipelines meet business requirements. Works with DBT (Data Build Tool) for transforming raw data into actionable insights. Oversees development of data solutions that enable predictive and prescriptive analytics. Ensures the technical quality of solutions, managing data as it moves across environments. Aligns data architecture to Healthfirst solution architecture.
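Data-quality checks of the kind this role owns (duplicate keys, null identifiers) are usually expressed in SQL. A minimal sketch, using SQLite as a local stand-in for a warehouse engine and hypothetical claims data; the same query pattern applies on Redshift or Athena:

```python
import sqlite3

# Hypothetical claims table; the duplicate row and the NULL member_id are
# planted so the two quality checks below each find something.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id TEXT, member_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("a1", "m1", 120.0), ("a1", "m1", 120.0), ("a2", None, 75.0)],
)

# Check 1: claim_ids that appear more than once.
dupes = conn.execute(
    "SELECT claim_id, COUNT(*) FROM claims GROUP BY claim_id HAVING COUNT(*) > 1"
).fetchall()
# Check 2: rows missing a member identifier.
nulls = conn.execute(
    "SELECT COUNT(*) FROM claims WHERE member_id IS NULL"
).fetchone()[0]
print(dupes, nulls)  # [('a1', 2)] 1
```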
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TCS! Role - Big Data. Experience: 4+ years. Location: Chennai. Please apply only if you are available for an F2F interview. Hands-on experience in PySpark and Hive. Experience in RDBMS concepts. Agile Scrum experience; ETL concepts. DevOps - Bitbucket.
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TCS! Skill: SAS + ETL Testing. Years of experience: 5-10 years. Location: Pan India. Job Description: Should be strong in Azure and ETL testing (highly important), SQL, and have good knowledge of Data Warehousing (DWH) concepts. Able to work individually and meet testing delivery expectations end to end. Able to analyze the requirement, proactively identify the scenarios, coordinate with the business team, and get them clarified. Able to understand, convert, and verify the business transformation logic in technical terms. Should be willing and ready to put in additional effort to learn SAS. Should be willing and ready to put in additional effort to learn Python and PySpark.
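Verifying business transformation logic end to end, as described above, typically means reconciling target rows against an independent re-application of the rule. A minimal sketch, with a hypothetical transformation and data:

```python
# Hypothetical source-to-target reconciliation of the kind ETL testing
# performs: the target must equal the business rule applied to the source.

def transform(row):
    """Business rule under test (illustrative): uppercase the region and
    convert the amount from paise to rupees."""
    return {"region": row["region"].upper(), "amount_inr": row["amount_paise"] / 100}

source = [{"region": "south", "amount_paise": 12550}]
target = [{"region": "SOUTH", "amount_inr": 125.5}]

# Any row where the independently computed value differs from the loaded
# target row is a defect to raise with the development team.
mismatches = [
    (expected, actual)
    for expected, actual in zip(map(transform, source), target)
    if expected != actual
]
print(mismatches)  # []
```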
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Role Description: Professional experience in business analysis, requirement gathering, and solution workflow design for AI/ML/Analytics projects. Experience in LLM/Gen AI – prompt engineering, RAG, and parameter and hyperparameter tuning. Strong programming skills in Python and SQL. Solid understanding of ML libraries and applications, such as time series analysis, neural networks, SVMs, and boosting methods, and their implementation using Python. Experience in deep learning techniques. Excellent communication skills to effectively collaborate with business SMEs and UX teams. Intermediate knowledge of DevOps practices, including CI/CD, automation, build management, and versioning. Understanding of cloud services, including Microsoft Azure, GCP, or Amazon Web Services. Experience in PySpark will be an added advantage. Responsibilities: Excellent problem-solving, critical thinking, and analytical skills. Proficient in data fetching, data merging, data wrangling, exploratory data analysis, and feature engineering. Strong SQL skills with working knowledge of machine learning algorithms. Drive client engagement and align the development team with the technical roadmap. Lead teams of data scientists to achieve project outcomes. Implement analytical requirements by defining and analyzing system problems, designing and testing standards, and developing solutions. Develop solutions by preparing and evaluating alternative workflow options. Determine operational objectives by studying business functions, gathering information, and evaluating output requirements and formats. Design new analytical solutions by analyzing requirements, constructing workflow charts and diagrams, studying system capabilities, and writing specifications. Improve systems by studying current practices and designing modifications. Recommend controls by identifying problems and writing improved procedures. Define project requirements by identifying milestones, phases, and elements, forming project teams, and establishing project budgets. 
Monitor project progress by tracking activities, resolving problems, publishing progress reports, and recommending actions. Maintain system protocols by writing and updating procedures. Provide references for users by writing and maintaining user documentation, providing help desk support, and training users. Maintain user confidence and protect operations by keeping information confidential. Prepare technical reports by collecting, analyzing, and summarizing information and trends. Maintain professional and technical knowledge by attending educational workshops, reviewing professional publications, establishing personal networks, benchmarking state-of-the-art practices, and participating in professional societies. What We Believe: We’re proud to embrace the same values that have shaped UST since the beginning. Since day one, we’ve been building enduring relationships and a culture of integrity. And today, it's those same values that are inspiring us to encourage innovation from everyone, to champion diversity and inclusion, and to place people at the centre of everything we do. Humility: We will listen, learn, be empathetic, and help selflessly in our interactions with everyone. Humanity: Through business, we will better the lives of those less fortunate than ourselves. Integrity: We honour our commitments and act with responsibility in all our relationships. UST is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. UST reserves the right to periodically redefine your roles and responsibilities based on the requirements of the organization and/or your performance. Skills: Data Management, Data Science, Python.
Posted 1 week ago