5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP BW/4HANA Data Modeling & Development
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that the applications are aligned with business objectives and user needs, while maintaining a focus on quality and efficiency throughout the project lifecycle.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of milestones.

Professional & Technical Skills:
- Must-have: Proficiency in SAP BW/4HANA Data Modeling & Development.
- Strong understanding of data warehousing concepts and best practices.
- Experience with ETL processes and data integration techniques.
- Familiarity with reporting tools and data visualization techniques.
- Ability to troubleshoot and optimize data models for performance.

Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP BW/4HANA Data Modeling & Development.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 1 week ago
2.0 - 4.0 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to enhance efficiency.

Professional & Technical Skills:
- Must-have: Proficiency in Snowflake Data Warehouse.
- Good-to-have: Experience with data modeling and database design.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with cloud platforms and data storage solutions.
- Experience in performance tuning and optimization of data queries.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based in Pune.
- 15 years of full-time education is required.
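The extract-transform-load flow this role centers on can be sketched in a few lines. The example below is a dependency-free illustration in plain Python, not Snowflake-specific code (real Snowflake access would go through its own connector); the record layout and function names are made up for illustration.

```python
# Illustrative ETL sketch: extract from a source, apply data-quality rules,
# load into a target. An in-memory list stands in for real systems.

def extract(source_rows):
    """Pull raw records from a source system (here: a list of dicts)."""
    return list(source_rows)

def transform(rows):
    """Apply data-quality rules: drop rows missing an id, normalize names."""
    cleaned = []
    for row in rows:
        if row.get("id") is None:
            continue  # quality gate: reject incomplete records
        cleaned.append({"id": row["id"], "name": row.get("name", "").strip().upper()})
    return cleaned

def load(rows, target):
    """Append transformed rows to the target 'table'; return rows loaded."""
    target.extend(rows)
    return len(rows)

source = [{"id": 1, "name": " alice "}, {"id": None, "name": "bad"}, {"id": 2, "name": "bob"}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
print(loaded)  # 2 rows survive the quality gate
```

The same three-stage shape scales from this toy list to pipelines between real systems; only the extract and load endpoints change.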
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Oracle Procedural Language Extensions to SQL (PLSQL)
Good-to-have skills: Google Cloud Platform Architecture
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and usable for stakeholders.

Roles & Responsibilities:
- Perform independently and become a subject matter expert (SME).
- Participate actively and contribute in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and maintain robust data pipelines to support data processing and analytics.
- Collaborate with data architects and analysts to design data models that meet business requirements.

Professional & Technical Skills:
- Must-have: Proficiency in Oracle Procedural Language Extensions to SQL (PLSQL).
- Good-to-have: Experience with Google BigQuery, Google Cloud Platform Architecture.
- Strong understanding of ETL processes and data integration techniques.
- Experience with data quality assurance and data governance practices.
- Familiarity with data warehousing concepts and technologies.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Oracle Procedural Language Extensions to SQL (PLSQL).
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Oracle Procedural Language Extensions to SQL (PLSQL)
Good-to-have skills: Google BigQuery, Google Cloud Platform Architecture
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will engage in the design, development, and maintenance of data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are robust, scalable, and aligned with business objectives.

Roles & Responsibilities:
- Perform independently and become a subject matter expert (SME).
- Participate actively and contribute in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and optimize data pipelines to ensure efficient data flow and processing.
- Monitor and troubleshoot data quality issues, implementing corrective actions as necessary.

Professional & Technical Skills:
- Must-have: Proficiency in Oracle Procedural Language Extensions to SQL (PLSQL).
- Good-to-have: Experience with Google BigQuery, Google Cloud Platform Architecture.
- Strong understanding of ETL processes and data integration techniques.
- Experience with data modeling and database design principles.
- Familiarity with data warehousing concepts and best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Oracle Procedural Language Extensions to SQL (PLSQL).
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Oracle Procedural Language Extensions to SQL (PLSQL)
Good-to-have skills: Google BigQuery, Google Cloud Platform Architecture
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Perform independently and become a subject matter expert (SME).
- Participate actively and contribute in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and maintain robust data pipelines to support data processing and analytics.
- Collaborate with data architects and analysts to design data models that meet business requirements.

Professional & Technical Skills:
- Must-have: Proficiency in Oracle Procedural Language Extensions to SQL (PLSQL).
- Good-to-have: Experience with Google BigQuery, Google Cloud Platform Architecture.
- Strong understanding of ETL processes and data integration techniques.
- Experience with data quality assurance and data governance practices.
- Familiarity with data warehousing concepts and technologies.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Oracle Procedural Language Extensions to SQL (PLSQL).
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Databricks Unity Catalog
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, ensuring that the applications meet the required standards and functionality while adapting to any changes in project scope or requirements.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate training and knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Databricks Unity Catalog.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with data governance and security best practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based in Hyderabad.
- 15 years of full-time education is required.
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Coimbatore
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in fostering a collaborative environment that encourages innovation and efficiency in application development.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Coimbatore.
- 15 years of full-time education is required.
- The candidate should be ready to work in rotational shifts.
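PySpark expresses pipelines as chains of transformations over distributed data. The sketch below mimics that filter/map/reduce-by-key shape on a plain Python list so the logic is visible without a Spark cluster; a real job would use the pyspark.sql or RDD APIs, and the records and names here are illustrative.

```python
# Dependency-free imitation of a PySpark-style transformation chain:
# filter -> map to (key, 1) pairs -> reduce counts by key.

from functools import reduce

records = ["error timeout", "ok", "error disk", "ok", "error timeout"]

errors = filter(lambda line: line.startswith("error"), records)
pairs = map(lambda line: (line.split()[1], 1), errors)

def reduce_by_key(acc, pair):
    key, count = pair
    acc[key] = acc.get(key, 0) + count
    return acc

counts = reduce(reduce_by_key, pairs, {})
print(counts)  # {'timeout': 2, 'disk': 1}
```

In actual PySpark the same chain runs in parallel across partitions, which is why each step must be a pure transformation rather than shared mutable state.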
Posted 1 week ago
2.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Data Engineering
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PLSQL), Google BigQuery, Google Cloud Platform Architecture
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. A typical day involves creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs effectively.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Data Engineering.
- Good-to-have: Experience with Oracle Procedural Language Extensions to SQL (PLSQL), Google BigQuery, Google Cloud Platform Architecture.
- Strong understanding of data modeling and database design principles.
- Experience with data warehousing solutions and data lake architectures.
- Familiarity with data integration tools and ETL frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 1 week ago
2.0 - 4.0 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: SAS Base & Macros
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to enhance efficiency.

Professional & Technical Skills:
- Must-have: Proficiency in SAS Base & Macros.
- Good-to-have: Experience with data visualization tools.
- Strong understanding of data warehousing concepts and practices.
- Experience in developing and maintaining ETL processes.
- Familiarity with data quality frameworks and best practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in SAS Base & Macros.
- This position is based in Chennai.
- 15 years of full-time education is required.
Posted 1 week ago
2.0 - 4.0 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Data Analysis & Interpretation
Good-to-have skills: NA
Minimum 2 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data management within the organization.

Roles & Responsibilities:
- Perform independently and become a subject matter expert (SME).
- Participate actively and contribute in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-have: Proficiency in Data Analysis & Interpretation.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data visualization tools to present findings effectively.
- Knowledge of programming languages such as Python or SQL for data manipulation.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Data Analysis & Interpretation.
- This position is based at our Pune office.
- 15 years of full-time education is required.
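The posting asks for Python or SQL for data manipulation; the two are often combined, with SQL doing set-based aggregation and Python interpreting the result. A minimal self-contained sketch using Python's standard-library sqlite3 module as a stand-in for a real database (the table and column names are made up):

```python
# Aggregate rows with SQL inside Python, then read the result back.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("south", 250.0), ("north", 50.0)],
)

# Group-by aggregation happens in SQL; Python receives tidy (region, total) rows
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 150.0), ('south', 250.0)]
```

The same pattern transfers directly to production databases; only the connection object changes.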
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Noida
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems.

Roles & Responsibilities:
- Perform independently and become a subject matter expert (SME).
- Participate actively and contribute in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and maintain data pipelines.
- Ensure data quality throughout the data lifecycle.
- Implement ETL processes for data migration and deployment.
- Collaborate with cross-functional teams to understand data requirements.
- Optimize data storage and retrieval processes.

Professional & Technical Skills:
- Must-have: Proficiency in Google BigQuery.
- Strong understanding of data engineering principles.
- Experience with cloud-based data services.
- Knowledge of SQL and database management systems.
- Hands-on experience with data modeling and schema design.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google BigQuery.
- This position is based at our Noida office.
- 15 years of full-time education is required.
Posted 1 week ago
7.0 - 11.0 years
13 - 18 Lacs
Pune
Work from Office
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must-have skills: Apache Kafka
Good-to-have skills: Data Analytics
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the architecture aligns with business needs and technical specifications. You will collaborate with various teams to ensure that data flows seamlessly across systems, contributing to the overall efficiency and effectiveness of data management within the organization.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Develop and maintain documentation related to data architecture and design.

Professional & Technical Skills:
- Must-have: Proficiency in Apache Kafka.
- Good-to-have: Experience with Data Analytics.
- Strong understanding of data modeling techniques and best practices.
- Experience with data integration tools and methodologies.
- Familiarity with cloud-based data storage solutions.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Kafka.
- This position is based at our Pune office.
- 15 years of full-time education is required.
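Kafka's core architectural idea, relevant to the "data flows seamlessly across systems" point above, is decoupling producers from consumers through append-only topics. The dependency-free sketch below mimics that publish/consume flow with a standard-library queue; it is not the Kafka client API (real clients live in packages such as confluent-kafka), and the event names are invented.

```python
# Toy producer/consumer decoupling: a queue stands in for a Kafka topic.

from queue import Queue

topic = Queue()  # stand-in for a single topic partition (ordered, append-only)

def produce(event):
    topic.put(event)  # producer appends events without knowing any consumer

def consume_all():
    events = []
    while not topic.empty():
        events.append(topic.get())  # consumer reads in arrival order
    return events

produce({"type": "order_created", "id": 1})
produce({"type": "order_shipped", "id": 1})
received = consume_all()
print(received)
```

Real Kafka adds what this sketch omits: durable storage, partitioning for parallelism, consumer offsets, and replication, which is exactly what makes it suitable for cross-system integration.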
Posted 1 week ago
0 years
6 - 10 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing
- Design and implement performance and operational enhancements for scalable data systems
- Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation
- Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization
- Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews
- Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions
- Collaborate with software engineers, data analysts, and business stakeholders across Agile teams
- Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed
- Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs
- Partner with architecture teams to drive forward-thinking data platform solutions
- Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership
- Mentor junior engineers and collaborate on solution design with team members and product owners
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree or equivalent experience
- Hands-on experience with cloud data services (AWS, Azure, or GCP)
- Experience building and maintaining ETL/ELT pipelines in enterprise environments
- Experience integrating with RESTful APIs
- Experience with Agile methodologies (Scrum, Kanban)
- Knowledge of data governance, security, privacy, and vulnerability management
- Understanding of authorization protocols (OAuth) and API integration
- Solid proficiency in SQL, NoSQL, and data modeling
- Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark
- Advanced Python skills for data engineering and data science (beyond Jupyter notebooks)
- Familiarity with big data technologies such as Spark, Hadoop, and Databricks
- Ability to build modular, testable, and reusable data solutions
- Solid grasp of data engineering concepts including data catalogs, data warehouses, data lakes (especially Iceberg), and data dictionaries

Preferred Qualifications:
- Experience with GitHub, Terraform, and GitHub Actions
- Experience with real-time data streaming (Kafka, Kinesis)
- Experience with feature engineering and machine learning pipelines (MLOps)
- Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery)
- Familiarity with AWS native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams), Glue (Data Catalog, ETL, Streaming), SageMaker, Athena, Redshift (including Spectrum)
- Demonstrated ability to mentor and guide junior engineers

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
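The responsibilities above stress reusable data components and data-quality standards. One hypothetical shape such a component can take is a small, composable validator that multiple pipelines share; the rule names and record layout below are invented for illustration.

```python
# A reusable data-quality component: build a validator from (name, predicate)
# rules, so each pipeline declares its rules instead of rewriting check logic.

def make_validator(rules):
    """Return a check function from a list of (name, predicate) rules."""
    def validate(record):
        failures = [name for name, predicate in rules if not predicate(record)]
        return failures  # empty list means the record passed every rule
    return validate

claim_rules = [
    ("has_member_id", lambda r: bool(r.get("member_id"))),
    ("amount_positive", lambda r: r.get("amount", 0) > 0),
]
validate_claim = make_validator(claim_rules)

print(validate_claim({"member_id": "M1", "amount": 12.5}))  # []
print(validate_claim({"amount": -3}))  # both rule names are returned
```

Because the component returns the names of failed rules rather than raising, pipelines can route bad records to quarantine while logging exactly which standard was violated.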
Posted 1 week ago
0.0 - 2.0 years
8 - 10 Lacs
Gurgaon
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express Team Overview: Global Credit & Model Risk Oversight, Transaction Monitoring & GRC Capabilities (CMRC) provides independent challenge and ensures that significant Credit and Model risks are properly evaluated and monitored, and Anti-Money Laundering (AML) risks are mitigated through the transaction monitoring program. In addition, CMRC hosts the central product organization responsible for the ongoing maintenance and modernization of GRC platforms and capabilities. How will you make an impact in this role? The AML Data Capabilities team was established with a mission to own and govern data across products – raw data, derivations, organized views to cater for analytics and production use cases and to manage the end-to-end data quality. This team comprises of risk data experts with deep SME knowledge of risk data, systems and processes covering all aspects of customer life cycle. Our mission is to build and support Anti-Money Laundering Transaction Monitoring data and rule needs in collaboration with Strategy and technology partners with focus on our core tenets of Timeliness , Quality and process efficiency. Responsibilities include: Develop and Maintain Organized Data Layers to cater for both Production use cases and Analytics for Transaction Monitoring of Anti-Money Laundering rules. 
Manage end to end Big Data Integration processes for building key variables from disparate source systems with 100% accuracy and 100% on time delivery Partner closely with Strategy and Modeling teams in building incremental intelligence, with strong emphasis on maintaining globalization and standardization of attribute calculations across portfolios. Partner with Tech teams in designing and building next generation data quality controls. Drive automation initiatives within existing processes and fully optimize delivery effort and processing time Effectively manage relationship with stakeholders across multiple geographies Contribute into evaluating and/or developing right tools, common components, and capabilities Follow industry best agile practices to deliver on key priorities Implementation of defined rules on Lucy platform in order to identify the AML alerts. Ensuring process and actions are logged and support regulatory reporting, documenting the analysis and the rule build in form of qualitative document for relevant stakeholders. 
Minimum Qualifications
Academic Background: Bachelor's degree with up to 2 years of relevant work experience. Strong Hive and SQL skills; knowledge of Big Data and related technologies. Hands-on experience with Hadoop and shell scripting is a plus. Understanding of Data Architecture and Data Engineering concepts. Strong verbal and written communication skills, with the ability to cater to a versatile technical and non-technical audience. Willingness to collaborate with cross-functional teams to drive validation and project execution. Good-to-have skills: Python / PySpark. Excellent analytical and critical thinking with attention to detail. Excellent planning and organizational skills, including the ability to manage interdependencies and execute under stringent deadlines. Exceptional drive and commitment; ability to work and thrive in a fast-changing, results-driven environment; and proven ability in handling competing priorities.
Behavioral Skills/Capabilities: Enterprise Leadership Behaviors
Set the Agenda:
- Ability to apply thought leadership and come up with ideas
- Take the complete perspective into account while designing solutions
- Use market best practices to design solutions
Bring Others with You:
- Collaborate with multiple stakeholders and other scrum teams to deliver on promise
- Learn from peers and leaders
- Coach and help peers
Do It the Right Way:
- Communicate effectively
- Be candid and clear in communications
- Make decisions quickly and effectively
- Live the company culture and values
We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
0 years
2 - 7 Lacs
Gurgaon
On-site
JOB DESCRIPTION About KPMG in India KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment. Data Architect (Analytics) – AD Location: NCR (Preferably) Job Summary: The Data Architect will be responsible for designing and managing the data architecture for data analytics projects. This role involves ensuring the integrity, availability, and security of data, as well as optimizing data systems to support business intelligence and analytics needs. Key Responsibilities: · Design and implement data architecture solutions to support data analytics and business intelligence initiatives. · Collaborate with stakeholders to understand data requirements and translate them into technical specifications. · Design and implement data systems and infrastructure setups, ensuring scalability, security, and performance. · Develop and maintain data models, data flow diagrams, and data dictionaries. · Ensure data quality, consistency, and security across all data sources and systems. · Optimize data storage and retrieval processes to enhance performance and scalability. · Evaluate and recommend data management tools and technologies. · Provide guidance and support to data engineers and analysts on best practices for data architecture.
· Conduct assessments of data systems to identify areas for improvement and optimization. · Understanding of Government of India data governance policies and regulatory requirements. · Hands-on experience troubleshooting complex technical problems in production environments. Equal employment opportunity information KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you. QUALIFICATIONS Qualifications: · Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field (Master's degree preferred). · Proven experience as a Data Architect or in a similar role, with a focus on data analytics projects. · Strong knowledge of data architecture frameworks and methodologies. · Proficiency in database management systems (e.g., SQL, NoSQL), data warehousing, and ETL processes. · Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure, Google Cloud). · Certification in data architecture or related fields.
Posted 1 week ago
0 years
10 Lacs
Hyderābād
On-site
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Responsibilities include, but are not limited to: Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing doing analysis and machine learning on terabytes and petabytes of diverse datasets. Experience in the areas: statistical modeling, feature extraction and analysis, supervised/unsupervised/semi-supervised learning. Exposure to the semiconductor industry is a plus but not a requirement. Ability to extract data from different databases via SQL and other query languages and apply data cleansing, outlier identification, and missing data techniques. Strong software development skills. Strong verbal and written communication skills. Experience with or desire to learn: Machine learning and other advanced analytical methods Fluency in Python and/or R pySpark and/or SparkR and/or SparklyR Hadoop (Hive, Spark, HBase) Teradata and/or other SQL databases Tensorflow and/or other statistical software, including scripting capability for automating analyses SSIS, ETL Javascript, AngularJS 2.0, Tableau Experience working with time-series data, images, semi-supervised learning, and data with frequently changing distributions is a plus Experience working with Manufacturing Execution Systems (MES) is a plus Existing papers from CVPR, NIPS, ICML, KDD, and other key conferences are a plus, but this is not a research position About Micron Technology, Inc. We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all.
With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
Posted 1 week ago
5.0 - 9.0 years
3 - 9 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. [Software Development Engineer-Test II] What you will do Let’s do this. Let’s change the world. In this vital role you will work closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, automate operations, monitor system health, and respond to incidents to minimize downtime.
- Test Automation & Framework Development: Design, develop, and maintain scalable test automation frameworks (UI, API, performance). Implement reusable test libraries and utilities to accelerate test development.
- Test Planning & Execution: Collaborate with Product, Development, and DevOps teams to define test strategies, scope, and acceptance criteria. Author, review, and execute automated and manual test cases for new features and bug fixes.
- Continuous Integration & Deployment: Integrate automated tests into CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps). Monitor build health, triage failures, and work with developers to resolve test stability issues.
- Defect Management & Reporting: Track, document, and prioritize defects; work with cross-functional teams to ensure timely resolution. Generate and present test reports, metrics, and dashboards to leadership.
- Performance & Security Testing (as applicable): Design and run performance/load tests using tools like JMeter, Gatling, or similar. Collaborate with security teams to integrate automated security scans and address vulnerabilities.
What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Bachelor’s degree/Master’s degree and 5 to 9 years of Computer Science, IT or related field experience Preferred Qualifications: Functional Skills: Programming in at least one modern language (e.g., Java, C#, Python, JavaScript/TypeScript). Hands-on experience with test automation frameworks (e.g., Selenium, Cypress, Playwright, REST Assured). Familiarity with API testing tools (e.g., Postman, SoapUI) and related libraries. Familiarity with testing AI models. Solid understanding of CI/CD practices and tools (Jenkins, GitHub Actions, Azure DevOps). Working knowledge of version control systems (Git).
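The API-testing tools named above (REST Assured, Postman and the like) all express the same arrange/act/assert pattern. A minimal Python sketch with a stubbed HTTP client, so it runs without a live service; the endpoint and response fields are hypothetical, not from this posting:

```python
# Minimal sketch of an automated API check. A real suite would use a
# framework such as pytest with a live HTTP client; here the client is
# stubbed with unittest.mock so the pattern is self-contained.
# The /health endpoint and its fields are hypothetical.
import json
from unittest.mock import Mock

def check_health(client) -> None:
    """Call /health and assert on the status code and response body."""
    resp = client.get("/health")
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    body = json.loads(resp.text)
    assert body["status"] == "ok"
    assert "version" in body

# Arrange: a fake client shaped like requests.Session for this one call.
fake = Mock()
fake.get.return_value = Mock(status_code=200, text='{"status": "ok", "version": "1.4.2"}')
check_health(fake)  # act + assert; raises AssertionError on failure
print("health check passed")
```

Swapping the stub for a real session object is the only change needed to point the same test at a deployed service.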
Good-to-Have Skills: Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes) Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk) Experience with data processing tools like Hadoop, Spark, or similar Experience with SAP integration technologies Soft Skills: Excellent analytical and troubleshooting skills Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation Ability to manage multiple priorities successfully Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. 
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Data Science Engineer What you will do Let’s do this. Let’s change the world. In this vital role, we are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines. Roles & Responsibilities: Collaborate with data scientists to develop, train, and evaluate machine learning models. Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring. Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment. Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency. Develop and implement monitoring systems to track model performance and identify issues. Conduct A/B testing and experimentation to optimize model performance.
Work closely with data scientists, engineers, and product teams to deliver ML solutions. Stay updated with the latest trends and advancements in machine learning and MLOps. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree / Bachelor's degree and 5 to 9 years [Job Code’s Discipline and/or Sub-Discipline] Functional Skills: Must-Have Skills: Solid foundation in machine learning algorithms and techniques. Experience in MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow); experience in DevOps tools (e.g., Docker, Kubernetes, CI/CD). Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Outstanding analytical and problem-solving skills; ability to learn quickly; good communication and interpersonal skills. Good-to-Have Skills: Experience with big data technologies (e.g., Spark, Hadoop), and performance tuning in query and data processing. Experience with data engineering and pipeline development. Experience in statistical techniques and hypothesis testing; experience with regression analysis, clustering and classification. Knowledge of NLP techniques for text analysis and sentiment analysis. Experience in analyzing time-series data for forecasting and trend analysis. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
170.0 years
2 - 8 Lacs
Chennai
On-site
Job ID: 32142 Location: Chennai, IN Area of interest: Business Support, Management & Efficiency Job type: Regular Employee Work style: Office Working Opening date: 2 Jul 2025 Job Summary S&T COO Services is responsible for Bank-wide people data enablement, creating methods and data models to inform data-driven decision processes resulting in improved business performance, data platform product management, delivery of operational and regulatory reports, dashboards, data-driven process automation, and the provision of advanced analytics support. To help carry us further we are searching for an experienced Lead, Data Analyst to join our team. The ideal candidate will be highly skilled in all aspects of Data Analytics, including data gathering, cleansing, analysing, interpreting and transformation to provide insights, with the storytelling and data visualization expertise to translate complex data into actionable information for a specific audience. Key Responsibilities Strategy Responsible for modelling complex problems and processes using various analytical tools and methods. Work with various data sources and transform raw data into meaningful information through reports and visualizations. Build & maintain stable, reliable & cost-effective Reporting & Analytics products capable of supporting current & future stakeholder needs. Business Manages client relationships across multiple geographies and effectively engages with key stakeholders. Manages client service delivery expectations for standard, ad hoc and project-related activities; develops a regular communication framework with the clients. Processes Understanding of data warehousing concepts and relational databases, translating business needs into data models supporting business solutions. Experience with data collection, cleaning, transformation, developing workflows and visualization using Dataiku from diverse data sources.
Able to automate Dataiku workflows, monitor performance and integrate with cloud platforms and big data technologies and BI tools. Implement machine learning models in DSS for predictive analytics. Strong proficiency in one of the BI tools (MicroStrategy, Tableau) Advanced knowledge of SQL (Joins, subqueries etc.) Working knowledge of Python for data analysis Present findings to stakeholders with clear storytelling and visualizations. Facilitate data cleansing & maintenance, monitor data changes and report anomalies based on insights and business specific knowledge. People & Talent Co-ordination with other COE departments, to drive global process and deliver on the overall HR collective agenda. Provide relevant insights, focus/ highlight key issues with complementing recommendations to help Stakeholders drive strategic business decisions. Risk Management The ability to identify key issues based on the process and put in place appropriate controls and measures. Perform data profiling, validation, and cleansing to ensure data integrity. Governance Adherence to the Group guidelines on Data Security (GDPR). Follow standard approach towards metrics/data in line with Data Dictionary and Data Asset Register Maintain documentation and consistency, ensure data accuracy, integrity, and security. Regulatory & Business Conduct Display exemplary conduct and live by the Group’s Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Effectively and collaboratively identify, escalate, mitigate, and resolve risk, conduct and compliance matters. 
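The SQL depth called for above (joins, subqueries) can be illustrated with a small self-contained example using Python's built-in sqlite3; the tables and figures are invented for illustration only:

```python
# Self-contained illustration of a join combined with a subquery, the
# level of SQL fluency the role calls for. Uses Python's built-in
# sqlite3 module; all data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER, name TEXT, dept_id INTEGER, salary REAL);
CREATE TABLE departments (id INTEGER, name TEXT);
INSERT INTO departments VALUES (1, 'HR'), (2, 'Analytics');
INSERT INTO employees VALUES
  (1, 'Asha', 2, 95000), (2, 'Ben', 1, 60000), (3, 'Chen', 2, 88000);
""")

# Join employees to departments, keeping only people paid above the
# company-wide average (computed by the scalar subquery).
rows = conn.execute("""
    SELECT e.name, d.name AS dept, e.salary
    FROM employees e
    JOIN departments d ON d.id = e.dept_id
    WHERE e.salary > (SELECT AVG(salary) FROM employees)
    ORDER BY e.salary DESC
""").fetchall()
print(rows)
# [('Asha', 'Analytics', 95000.0), ('Chen', 'Analytics', 88000.0)]
```

The same join-plus-subquery shape carries over directly to Hive (HQL) and to the SQL recipes inside Dataiku workflows.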
Key stakeholders Business Heads Global HRBP Management COEs, Risk and Compliance Skills and Experience Dataiku SQL Python Hadoop (HQL) MicroStrategy Tableau Storytelling with Data Qualifications Education Bachelor’s / Master’s degree Training Dataiku, Tableau, MicroStrategy, Storytelling Certifications Dataiku, Tableau, if required for the role. Competencies Action Oriented Collaborates Customer Focus Gives Clarity & Guidance Manages Ambiguity Develops Talent Drives Vision & Purpose Nimble Learning Decision Quality Courage Instills Trust Strategic Mindset Technical Competencies: This is a generic competency to evaluate the candidate on role-specific technical skills and requirements About Standard Chartered We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we: Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term What we offer In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations. Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combined come to a minimum of 30 days. Flexible working options based around home and office locations, with flexible working patterns. Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits. A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential. Recruitment Assessments Some of our roles use assessments to help us understand how suitable you are for the role you've applied to. If you are invited to take an assessment, this is great news. It means your application has progressed to an important stage of our recruitment process.
Visit our careers website www.sc.com/careers
Posted 1 week ago
5.0 years
4 - 6 Lacs
Bengaluru
On-site
Degree or postgraduate in Computer Science or a related field (or equivalent industry experience), with a background in Mathematics and Statistics. Minimum 5+ years of development and design experience as a Data Engineer. Experience on Big Data platforms and distributed computing (e.g. Hadoop, Map/Reduce, Spark, HBase, Hive). Experience in data pipeline software engineering and best practices in Python (linting, unit tests, integration tests, git flow/pull request process, object-oriented development, data validation, algorithms and data structures, technical troubleshooting and debugging, bash scripting). Experience in Data Quality Assessment (profiling, anomaly detection) and data documentation (schemas, dictionaries). Experience in data architecture, data warehousing and modelling techniques (Relational, ETL, OLTP), with the ability to weigh performance alternatives. Used SQL, PL/SQL or T-SQL with RDBMSs in production environments; NoSQL databases nice to have. Linux OS configuration and use, including shell scripting. Well versed in Agile, DevOps and CI/CD principles (GitHub, Jenkins etc.), and actively involved in solving and troubleshooting issues in a distributed services ecosystem. Experience in Agile methodology. Ensure quality of technical and application architecture and design of systems across the organization. Effectively research and benchmark technology against other best-in-class technologies. Experience in Banking, Financial and Fintech domains in an enterprise environment preferred. Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience. Self-motivated self-starter, with the ability to own and drive things without supervision, working collaboratively with teams across the organization. Excellent soft and interpersonal skills to interact and present ideas to the team.
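The Data Quality Assessment skills listed above (profiling, anomaly detection) are often exercised with simple statistical checks before anything heavier is built. A minimal sketch using only the Python standard library; the data and the 1.5×IQR threshold are illustrative:

```python
# Minimal sketch of an anomaly-detection data-quality check: flag values
# outside the classic Tukey fences (Q1 - k*IQR, Q3 + k*IQR).
# The sample data and the default k = 1.5 are illustrative only.
from statistics import quantiles

def iqr_outliers(values, k: float = 1.5):
    """Return the values lying outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4)  # exclusive-method quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

latencies_ms = [12, 14, 13, 15, 14, 13, 12, 98]  # one suspect reading
print(iqr_outliers(latencies_ms))
# [98]
```

In a real pipeline the same check would run per column during profiling, with the flagged rows routed to a quarantine table rather than printed.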
The engineer should have good listening skills and speak clearly in front of the team, stakeholders and management, should always carry a positive attitude towards work, establishing effective team relations and building a climate of trust within the team, and should be enthusiastic and passionate, creating a motivating environment for the team. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are
Wayfair is moving the world so that anyone can live in a home they love – a journey enabled by more than 3,000 Wayfair engineers and a data-centric culture. Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of Advertiser goals while showing highly relevant and engaging Ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, leveraging state-of-the-art Machine Learning techniques.

The Advertising Optimization & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions.

We are looking for an experienced Senior Machine Learning Scientist to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for building intelligent, ML-powered systems that drive personalized recommendations and campaign automation within Wayfair's advertising platform. You will work closely with other scientists, as well as members of our internal Product and Engineering teams, to apply your ML expertise to define and deliver 0-to-1 capabilities that unlock substantial commercial value and directly enhance advertiser outcomes.

What You'll Do
- Design and build intelligent budget, tROAS, and SKU recommendations, and simulation-driven decisioning that extends beyond the current advertising platform capabilities.
- Lead the next phase of GenAI-powered creative optimization and automation to drive significant incremental ad revenue and improve supplier outcomes.
- Raise technical standards across the team by promoting best practices in ML system design and development.
- Partner cross-functionally with Product, Engineering, and Sales to deliver scalable ML solutions that improve supplier campaign performance.
- Ensure systems are designed for reuse, extensibility, and long-term impact across multiple advertising workflows.
- Research and apply best practices in advertising science, GenAI applications in creative personalization, and auction modeling; keep Wayfair at the forefront of innovation in supplier marketing optimization.
- Collaborate with Engineering teams (AdTech, ML Platform, Campaign Management) to build and scale the infrastructure needed for automated, intelligent advertising decisioning.

We Are a Match Because You Have:
- Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
- 9+ years of experience building large-scale machine learning algorithms.
- 4+ years of experience in an architect or technical leadership position.
- Strong theoretical understanding of statistical models such as regression and clustering, and of ML algorithms such as decision trees, neural networks, transformers, and NLP techniques.
- Proficiency in programming languages such as Python and relevant ML libraries (e.g., TensorFlow, PyTorch) to develop production-grade products.
- Strategic thinking with a customer-centric mindset and a desire for creative problem solving, looking to make a big impact in a growing organization.
- Demonstrated success influencing senior-level stakeholders on strategic direction based on recommendations backed by in-depth analysis; excellent written and verbal communication.
- Ability to partner cross-functionally to own and shape technical roadmaps.
- Intellectual curiosity and a desire to always be learning!
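The budget and tROAS recommendation work described above often rests on an exploration/exploitation trade-off. As a toy sketch of that idea (an epsilon-greedy bandit; all campaign names and ROAS numbers here are hypothetical, not Wayfair's actual system):

```python
import random

def epsilon_greedy_allocate(campaign_roas, epsilon=0.1, rng=None):
    """Pick the campaign that should receive the next unit of budget.

    With probability `epsilon`, explore a random campaign; otherwise
    exploit the campaign with the best observed ROAS so far.
    """
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.choice(sorted(campaign_roas))  # sorted for determinism
    return max(campaign_roas, key=campaign_roas.get)

# Hypothetical observed ROAS per campaign (illustrative data only).
roas = {"sofas": 3.2, "lighting": 1.8, "rugs": 2.5}
best = epsilon_greedy_allocate(roas, epsilon=0.0)  # pure exploitation picks "sofas"
```

A production system would replace the point estimates with posterior distributions (e.g. Thompson sampling) and fold in budget constraints, but the core loop is the same.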
Nice to Have
- Experience with GCP, Airflow, and containerization (Docker).
- Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, Spark, etc.
- Familiarity with Generative AI and agentic workflows.
- Experience in Bayesian Learning, Multi-armed Bandits, or Reinforcement Learning.

About Wayfair Inc.
Wayfair is one of the world's largest online destinations for the home. Through our commitment to industry-leading technology and creative problem-solving, we are confident that Wayfair will be home to the most rewarding work of your career. If you're looking for rapid growth, constant learning, and dynamic challenges, then you'll find that amazing career opportunities are knocking. No matter who you are, Wayfair is a place you can call home. We're a community of innovators, risk-takers, and trailblazers who celebrate our differences, and know that our unique perspectives make us stronger, smarter, and well-positioned for success. We value and rely on the collective voices of our employees, customers, community, and suppliers to help guide us as we build a better Wayfair – and world – for all. Every voice, every perspective matters. That's why we're proud to be an equal opportunity employer. We do not discriminate on the basis of race, color, ethnicity, ancestry, religion, sex, national origin, sexual orientation, age, citizenship status, marital status, disability, gender identity, gender expression, veteran status, genetic information, or any other legally protected characteristic. We are interested in retaining your data for a period of 12 months to consider you for suitable positions within Wayfair. Your personal data is processed in accordance with our Candidate Privacy Notice (which can be found here: https://www.wayfair.com/careers/privacy). If you have any questions regarding our processing of your personal data, please contact us at dataprotectionofficer@wayfair.com.
If you would rather not have us retain your data please contact us anytime at dataprotectionofficer@wayfair.com.
Posted 1 week ago
5.0 - 8.0 years
15 - 25 Lacs
Kolkata, Chennai, Bengaluru
Hybrid
Global Gen AI Developer

Enabling a software-defined, electrified future. Visteon is a technology company that develops and builds innovative digital cockpit and electrification products at the leading edge of the mobility revolution. Founded in 2000, Visteon brings decades of automotive intelligence combined with Silicon Valley speed to apply global insights that help transform the software-defined vehicle of the future for many of the world's largest OEMs. The company employs 10,000 employees in 18 countries around the globe.

Mission of the Role: Facilitate enterprise machine learning and artificial intelligence solutions using the latest technologies Visteon is adopting globally.

Key Objectives of this Role: The primary goal of the Global ML/AI Developer is to leverage advanced machine learning and artificial intelligence techniques to develop innovative solutions that drive Visteon's strategic initiatives. By collaborating with cross-functional teams and stakeholders, this role identifies opportunities for AI-driven improvements, designs and implements scalable ML models, and integrates these models into existing systems to enhance operational efficiency. By following development best practices, fostering a culture of continuous learning, and staying abreast of AI advancements, the Global ML/AI Developer ensures that all AI solutions align with organizational goals, support data-driven decision-making, and continuously improve Visteon's technological capabilities.

Qualification, Experience and Skills: 6-8 years
- Technical Skills: Expertise in machine learning frameworks (e.g., TensorFlow, PyTorch), programming languages (e.g., Python, R, SQL), and data processing tools (e.g., Apache Spark, Hadoop). Proficiency in developing, training, and deploying ML models, including supervised and unsupervised learning, deep learning, and reinforcement learning. Strong understanding of data engineering concepts, including data preprocessing, feature engineering, and data pipeline development. Experience with cloud platforms (preferably Microsoft Azure) for deploying and scaling ML solutions.
- Business Acumen: Strong business analysis skills and the ability to translate complex technical concepts into actionable business insights and recommendations.

Key Behaviors:
- Innovation: Continuously seeks out new ideas, technologies, and methodologies to improve AI/ML solutions and drive the organization forward.
- Attention to Detail: Pays close attention to all aspects of the work, ensuring accuracy and thoroughness in data analysis, model development, and documentation.
- Effective Communication: Clearly and effectively communicates complex technical concepts to non-technical stakeholders, ensuring understanding and alignment across the organization.
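The feature-engineering and preprocessing skills listed above often start with something as basic as standardizing numeric columns. A dependency-free sketch of z-score scaling (a real pipeline would use scikit-learn's StandardScaler or Spark ML rather than hand-rolled code):

```python
import math

def standardize(values):
    """Scale a list of numbers to zero mean and unit variance (z-scores)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var) or 1.0  # guard against zero-variance columns
    return [(v - mean) / std for v in values]

z = standardize([10.0, 20.0, 30.0])
# z is centered at 0 with unit variance
```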
Posted 1 week ago
3.0 - 4.0 years
0 Lacs
India
On-site
Description
GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com.

We believe that innovative technology starts with the best talent and have been ranked one of Ad Age's Best Places to Work in 2021, 2022, 2023 & 2025!

About Team
GroundTruth seeks a Data Engineering Software Engineer to join our Attribution team. The Attribution Team specialises in designing and managing data pipelines that capture and connect user engagement data to optimise ad performance. We engineer scalable solutions for accurate, real-world attribution across the GroundTruth ecosystem, building seamless data flows that fuel reliable analytics and decision-making using big data technologies such as MapReduce, Spark, and Glue. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised, detail-oriented team players who are problem solvers with a maker mindset. As a Software Engineer (SE) on our Integration Team, you will build solutions that add new capabilities to our platform.

You Will
- Create and maintain various ingestion pipelines for the GroundTruth platform.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
- Work with stakeholders, including the Product, Analytics, and Client Services teams, to assist with data-related technical issues and support their data infrastructure needs.
- Prepare detailed specifications and low-level designs.
- Participate in code reviews.
- Test the product in controlled, real situations before going live.
- Maintain the application once it is live.
- Contribute ideas to improve the data platform.

You Have
- B.Tech./B.E./M.Tech./MCA or equivalent in computer science.
- 3-4 years of experience in Software Engineering.
- Experience with data ingestion pipelines.
- Experience with the AWS stack used for data engineering: EC2, S3, EMR, ECS, Lambda, and Step Functions.
- Hands-on experience with Python/Java for the orchestration of data pipelines.
- Experience writing analytical queries using SQL.
- Experience with Airflow and Docker.
- Proficiency with Git.

How can you impress us?
- Knowledge of REST APIs.
- Experience with big data technologies like Hadoop, MapReduce, and Pig.
- Knowledge of shell scripting.
- Experience with BI tools like Looker.
- Experience with DB maintenance.
- Experience with Amazon Web Services and Docker.
- Configuration management and QA practices.

Benefits
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave (maternity and paternity)
- Flexible time off (earned leaves, sick leaves, birthday leave, bereavement leave & company holidays)
- In-office daily catered breakfast, lunch, snacks, and beverages
- Health cover for any hospitalization, covering both the nuclear family and parents
- Tele-med for free doctor consultation, plus discounts on health checkups and medicines
- Wellness/gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (mobile subsidy program)
- Internet reimbursement, postpaid cell phone bill, or both
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- National Pension System employer match
- Meal card for tax benefit
- Special benefits on salary account
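The ingestion pipelines this role centres on follow the extract-transform-load pattern. A minimal, framework-free sketch of that shape (field names and the in-memory source/sink are hypothetical stand-ins; a production pipeline would read from S3 and run on Spark/EMR as listed above):

```python
def extract(raw_rows):
    """Stand-in for reading raw events from a source such as S3."""
    return iter(raw_rows)

def transform(rows):
    """Normalize records and drop rows missing a user id."""
    for row in rows:
        if not row.get("user_id"):
            continue  # incomplete record: skip rather than load bad data
        yield {
            "user_id": row["user_id"],
            "event": row.get("event", "unknown").lower(),
        }

def load(rows, sink):
    """Stand-in for writing to a warehouse table; appends to a list."""
    sink.extend(rows)
    return sink

raw = [{"user_id": "u1", "event": "VISIT"}, {"event": "click"}]
sink = load(transform(extract(raw)), [])
# sink holds one cleaned record: the row without a user_id was dropped
```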
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
India
Remote
JD: AWS Data Engineer
Experience Range: 7 to 11 Years
Location: Remote
Shift Timings: 12 PM to 9 PM
Primary Skills: Python, PySpark, SQL, AWS

Responsibilities
- Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality.
- AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc.
- Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options.
- Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services.
- Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources.
- Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency.
- Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud.
- Team Leadership: Mentor and guide data engineers, ensuring they adhere to best practices and meet project deadlines.

Qualifications
- Bachelor's degree in computer science, engineering, or a related field.
- 6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms.
- Strong understanding of data warehousing and data lake concepts.
- Proficiency in SQL and at least one programming language (Python/PySpark).
- Good to have: experience with big data technologies like Hadoop, Spark, and Kafka.
- Knowledge of data modeling and data quality best practices.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Preferred Qualifications
- Certifications in AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect.

If interested, please submit your CV to Khushboo@Sourcebae.com or share it via WhatsApp at 8827565832. Stay updated with our latest job opportunities and company news by following us on LinkedIn: https://www.linkedin.com/company/sourcebae
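The "data quality frameworks" responsibility above boils down to running rule checks (completeness, consistency) over incoming data and flagging violations. A minimal, purely illustrative rule-checker (real deployments would typically use AWS Glue Data Quality or a library such as Great Expectations instead of hand-rolled checks):

```python
def check_completeness(rows, column):
    """Return the fraction of rows with a non-null value in `column`."""
    if not rows:
        return 0.0
    filled = sum(1 for r in rows if r.get(column) is not None)
    return filled / len(rows)

def run_checks(rows, rules):
    """rules maps column -> minimum required completeness (0.0-1.0).

    Returns a dict of failing columns and their observed scores.
    """
    return {
        col: score
        for col, threshold in rules.items()
        if (score := check_completeness(rows, col)) < threshold
    }

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
failures = run_checks(rows, {"id": 1.0, "amount": 1.0})
# "amount" fails (only 50% complete); "id" passes
```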
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview: We are seeking a talented Data Engineer with expertise in Apache Spark, Python/Java, and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.

Key Responsibilities:
- Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale.
- Utilise Apache Spark for data processing and analysis.
- Utilise AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines.
- Optimise data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching.
- Write efficient and maintainable Python code.
- Implement and manage distributed systems for data processing.
- Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions.
- Ensure data quality and integrity throughout the data lifecycle.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Proven experience designing and developing data pipelines using Apache Spark and Python/Java.
- Strong knowledge of distributed systems; experience with distributed systems concepts (Hadoop, YARN) is a plus.
- In-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue).
- Familiarity with data warehousing concepts (data modeling, ETL) is preferred.
- Strong programming skills in Python (Pandas, NumPy, Scikit-learn are a plus).
- Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift).
- Familiarity with data warehousing and ETL processes.
- Knowledge of data governance best practices.
- A good understanding of object-oriented programming (OOP) concepts.
- Hands-on experience with SQL database design.
- Experience with Python, SQL, and data visualization/exploration tools.
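The partitioning and bucketing consideration mentioned above comes down to routing records by a key hash so that rows sharing a key land together, letting joins and aggregations avoid shuffles. A pure-Python illustration of the idea (Spark does this internally via `partitionBy`/`bucketBy` with its own hash function; this sketch only shows the mechanism):

```python
import hashlib

def bucket_for(key, num_buckets):
    """Deterministically assign a key to one of num_buckets.

    Uses a stable digest (md5) so the assignment is identical across
    runs and machines, unlike Python's salted built-in hash().
    """
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_buckets

def partition(rows, key, num_buckets):
    """Group rows into buckets by the hash of rows[key]."""
    buckets = [[] for _ in range(num_buckets)]
    for row in rows:
        buckets[bucket_for(row[key], num_buckets)].append(row)
    return buckets

rows = [{"user": "u1"}, {"user": "u2"}, {"user": "u1"}]
buckets = partition(rows, "user", 4)
# both "u1" rows land in the same bucket, so a per-user aggregation
# never needs to look at more than one bucket
```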
Posted 1 week ago