7.0 years
0 Lacs
India
Remote
Who We Are
At Twilio, we're shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion, means that no matter your location, you're part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we're acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See Yourself at Segment
Join us as our next Staff Data Engineer (L4) on the Segment Data Platform team.

About The Job
As a Staff Data Engineer, you will play a key role in building and maintaining data infrastructure that processes large-scale datasets efficiently and reliably. You'll contribute to the design and implementation of high-volume pipelines, collaborate with engineers across teams, and help ensure our platform remains robust, scalable, and easy to use. This is a great role for someone with a strong data engineering background who's ready to step into broader responsibilities and help shape the evolution of Segment's platform.

Responsibilities
In this role, you will:
Design and build the next generation of the Warehouse Activation platform, processing billions of events and powering various use cases for Twilio Segment customers. This encompasses working on stream data processing, storage, and other mission-critical systems.
Ship features that favor high availability and throughput, accepting eventual consistency (see the sketch at the end of this posting).
Collaborate with engineering and product leads, as well as teams across Twilio Segment.
Support the reliability and security of the platform.
Build and optimize globally available and highly scalable distributed systems.
Act as team Tech Lead as needed.
Mentor other engineers on the team in technical architecture and design.
Partner with application teams to deliver end-to-end customer success.

Qualifications
Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required
7+ years of industry experience in backend or data engineering roles.
Strong programming skills in Scala, Java, or a similar language.
Solid experience with Apache Spark or other distributed data processing frameworks.
Experience with Trino, Snowflake, and Delta Lake, and comfort working with ecommerce-scale datasets.
Working knowledge of batch and stream processing architectures.
Experience designing, building, and maintaining ETL/ELT pipelines in production.
Familiarity with AWS and tools like Parquet, Delta Lake, or Kafka.
Comfortable operating in a CI/CD environment with infrastructure-as-code and observability tools.
Strong collaboration and communication skills.

Nice To Have
Familiarity with GDPR, CCPA, or other data governance requirements.
Experience with high-scale event processing or identity resolution.
Exposure to multi-region, fault-tolerant distributed systems.

Location
This role will be remote and based in India (Karnataka, Maharashtra, New Delhi, Tamil Nadu, and Telangana).

Travel
We prioritize connection and opportunities to build relationships with our customers and each other.
For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer
Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.

Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
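The posting above centers on high-throughput event processing with eventual consistency. As a minimal illustration of one pattern behind that trade-off, here is a hedged Scala sketch of an idempotent warehouse upsert using Delta Lake's MERGE API; the S3 paths, table layout, and user_id key are invented for the example and are not Segment's actual design.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object WarehouseActivationUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("warehouse-activation-upsert")
      // Delta Lake needs these extensions registered on the session
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
              "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    // Hypothetical batch of profile events landed by the upstream pipeline
    val updates = spark.read.parquet("s3://example-bucket/events/batch-0042/")

    // A MERGE keyed on a stable identifier makes re-delivered batches
    // idempotent: replaying the same batch converges to the same table state.
    DeltaTable.forPath(spark, "s3://example-bucket/warehouse/profiles")
      .as("t")
      .merge(updates.as("u"), col("t.user_id") === col("u.user_id"))
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()

    spark.stop()
  }
}
```

Keying the MERGE on a stable identifier means a re-delivered batch converges to the same table state, which is what lets a pipeline favor availability and throughput while still settling to a consistent result.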
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.
Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems.
Apply fundamental knowledge of programming languages for design specifications.
Analyze applications to identify vulnerabilities and security issues, as well as conduct testing and debugging.
Serve as advisor or coach to new or lower-level analysts.
Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions.
Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Operate with a limited level of direct supervision, exercising independence of judgement and autonomy. Act as SME to senior stakeholders and/or other team members.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
5+ years of proven experience developing and managing Big Data solutions using Apache Spark and Scala is a must, with a strong hold on Spark Core, Spark SQL, and Spark Streaming (see the sketch below).
Strong programming skills in Scala, Java, or Python.
Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume.
Proficiency in SQL and experience with relational databases (Oracle/PL-SQL).
Experience working on Kafka and JMS/MQ applications.
Experience working across multiple operating systems (Unix, Linux, Windows).
Familiarity with data warehousing concepts and ETL processes.
Experience in performance tuning of large technical solutions with significant data volumes.
Knowledge of data modeling, data architecture, and data integration techniques.
Knowledge of best practices for data security, privacy, and compliance.
Experience with Java (Core Java, J2EE, Spring Boot RESTful services), web services (REST, SOAP), XML, JavaScript, microservices, SOA, etc.
Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem.
Experience developing frameworks and utility services, including logging/monitoring.
Experience delivering high-quality software following continuous delivery and using code quality tools (JIRA, GitHub, Jenkins, Sonar, etc.).
Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark.
Deep knowledge implementing different data storage solutions such as RDBMS (Oracle), Hive, HBase, Impala, and NoSQL databases.

Education:
Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed.
Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Most Relevant Skills
Please see the requirements listed above.

Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
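As a concrete illustration of the Spark SQL and Hive skills this role calls for, here is a minimal Scala sketch, assuming the cluster can reach a Hive metastore; the database, table, and column names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object HiveAggregation {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport wires the session to the Hive metastore,
    // so existing Hive tables are queryable directly from Spark SQL
    val spark = SparkSession.builder()
      .appName("hive-aggregation")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical table and columns; swap in real names
    val dailyTotals = spark.sql(
      """SELECT trade_date, instrument, SUM(notional) AS total_notional
        |FROM trades.fx_deals
        |WHERE trade_date >= '2024-01-01'
        |GROUP BY trade_date, instrument""".stripMargin)

    // Persist the aggregate back to the warehouse as a managed table
    dailyTotals.write.mode("overwrite").saveAsTable("reports.fx_daily_totals")
    spark.stop()
  }
}
```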
Posted 1 week ago
5.0 - 8.0 years
1 - 6 Lacs
Pune, Chennai, Bengaluru
Hybrid
Hello Connections, Exciting Opportunity Alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers.
Job Profile: Data Engineer
Experience: Minimum 5 to maximum 8 years of experience
Location: Chennai / Pune / Mumbai / Hyderabad / Bangalore
Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive
Qualification: B.Tech / B.E / MCA / Computer Science background - any specialization
How to Apply? Send your CV to: sipriyar@sightspectrum.in
Contact Number - 6383476138
Don't miss out on this amazing opportunity to accelerate your professional career!
#bigdata #dataengineer #hadoop #spark #python #hive #pyspark
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
TradeAgent Senior Java Engineer (Technology Products > Software Development > Senior Developer, Career Stage = Manager)

Role Profile
The successful candidate for the TradeAgent Senior Developer role, reporting to the TradeAgent Director of Technical Solutions & Delivery, will form part of a team building a complex, ground-up, cloud-based critical market infrastructure service in a bold new venture for LSEG. This exciting opportunity requires a candidate who takes great pride in delivering excellence, with excellent logical and technical skills, a can-do attitude combined with a helpful mentality, and a wish to play a critical role in forming and growing a new business.

Key Responsibilities
Build, deliver and maintain the multiple components of the TradeAgent platform, ensuring timely delivery of work items.
Resolve high-impact problems through in-depth evaluation of sophisticated architectures, business processes and industry standards.
Serve as an advisor to develop highly resilient and future-proof solutions.
Contribute to research and suggest new projects for the TradeAgent platform.
Be willing to take on new responsibilities based on project needs and circumstances.
Contribute to and guide the programme initiatives in engineering excellence and learning and development.
Ensure work is well documented and communicated, with stakeholder expectations managed.
Be challenging and questioning while ensuring trust and respect are maintained and a one-team mentality is promoted.

Key Skills And Experience

Event-driven microservices architecture – we are looking for 10 years of experience and an excellent understanding of microservices designs, their pitfalls, and best practices.
You have knowledge of Domain Driven Design and event-driven architecture.
You have experience of working with containerised and orchestrated services using Docker and Kubernetes.
You have experience of event-driven patterns that allow for an efficient and robust communication architecture.
You have experience of building and maintaining DevOps pipelines for delivering applications, ideally using GitLab.
You have experience of using shift-left testing principles and frameworks using technology such as JUnit, Cucumber, Gherkin, contract testing (PACT), Testcontainers or other similar technology.
You have working knowledge of using event and message brokers, such as Kafka and MQ.

Advanced Java programming
You have strong experience in Object Oriented Programming.
You have a strong grasp of Java 17 and higher, including advanced features, and have used Spring Boot.
You have experience with developing REST services (REST design principles, Swagger / OpenAPI, Spring REST MVC).
You are proficient in developing and delivering enterprise-grade Java applications.
You have experience of working with data structures, algorithms, concurrency and multi-threading.

Database Management
You have strong SQL knowledge and experience working with relational DBs, such as Postgres.
You have a working knowledge of object storage such as AWS S3.
You have knowledge of database version control tools such as Flyway and Liquibase.

Cloud Architecture
You have worked on a major public cloud, preferably AWS.
You have used cloud-based technology like AWS Aurora, MSK, S3 and IAM.
You have a basic understanding of cloud networking.

Agile ways of working
You understand and believe in the ethos of agile working.
You have experience of working in a Scrum/Kanban model.
You can participate and actively collaborate and contribute to sprint ceremonies, including Product Backlog Refinement.
You have experience of collaborating with cross-functional teams in scaled agile setups.

The following skills are nice to have but not essential:
You have a good understanding of financial instruments (e.g., equities, bonds, derivatives).
You have experience of writing applications using Scala.
You have developed web applications using ReactJS.

Key Behaviours
You have demonstrated a keen focus on delivery excellence, meeting commitments and managing your stakeholders' expectations.
You can demonstrate the ability to take on responsibility and be accountable for it.
You can work well within a team, are helpful and highly collaborative.
You can work with the business, architecture, and delivery staff to understand their requirements in depth and translate that into robust, timely delivered applications.
You can be critical and challenging while maintaining respect.
You understand the importance of communication within a team and have championed it in the past.
You have a desire to learn, improve and innovate.
You have very high development standards, especially for code quality, code reviews, unit testing, continuous integration, and deployment.
You can operate within a cross-functional team, working closely with a wide range of people from different disciplines.
You are an engineer at heart who enjoys working with various technologies, with an appetite for taking on challenges and maximising new technologies while minimising complexity.

Diversity & Inclusion
People are at the heart of what we do and drive the success of our business. Our shared values of Integrity, Partnership, Innovation and Excellence are at the core of our culture, and our colleagues thrive personally and professionally through them. We embrace diversity and actively seek to attract people with unique backgrounds and perspectives. We are always looking at ways to become more agile so we meet the needs of our teams and customers. We believe that an inclusive, collaborative workplace is pivotal to our success and supports the potential and growth of all colleagues at LSEG.

About Us
London Stock Exchange Group (LSE.L) is a diversified international market infrastructure and capital markets business sitting at the heart of the world's financial community. The Group can trace its history back to 1698.

The Group operates a broad range of international equity, bond and derivatives markets, including London Stock Exchange; MTS, Europe's leading fixed income market; and Turquoise, a pan-European equities MTF. It is also home to one of the world's leading growth markets for SMEs, AIM. Through its platforms, the Group offers international business and investors unrivalled access to Europe's capital markets.

Post trade and risk management services are a significant part of the Group's business operations. In addition to majority ownership of multi-asset global CCP operator LCH Group, LSEG operates CC&G, the Italian clearing house; the T2S-ready European settlement business; and globeSettle, the Group's newly established CSD based in Luxembourg.

The Group is a global leader in indexing and analytic solutions. FTSE Russell offers thousands of indexes that measure and benchmark markets around the world. The Group also provides customers with an extensive range of real time and reference data products, including SEDOL, UnaVista, and RNS.

London Stock Exchange Group is a leading developer of high performance trading platforms and capital markets software for customers around the world.
In addition to the Group's own markets, over 35 other organisations and exchanges use the Group's MillenniumIT trading, surveillance and post trade technology. Headquartered in London, with significant operations in North America, Italy, France and Sri Lanka, the Group employs approximately 4,700 people.

Values & Behaviours
Integrity: My word is my bond. Integrity underpins all that we do – from an unshakable commitment to building and supporting global markets based on transparency and trust, to every transaction across our business with each and every stakeholder. We are a source of enduring confidence in the financial system, so when we say that our word is our bond – we mean it.

Partnership: We collaborate to succeed. We pride ourselves on working together as proactive partners, building positive relationships with our colleagues, customers, investors, regulators, governments and shareholders – for our mutual success and the benefit of all.

Innovation: We nurture new ideas. We are ambitious and forward-looking – a pioneering Group of market innovators, driven by fresh thinking that has kept us ahead of change. We prudently and proactively invest to make sure that our markets and services are constantly moving forward, developing and evolving with advances in technology.

Excellence: We are committed to quality. We have a fundamental commitment to developing talented teams who deliver to the highest standards in all that we do. By collaborating together, we will sustain industry-leading levels of excellence, setting the benchmarks that inspire ever better performance.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions.

Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity.

LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives.

We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone's race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.
Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it's used for, how it's obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential, and your responsibility, to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are looking for a Senior Data Engineer to lead the design and implementation of scalable data infrastructure and engineering practices. This role will be critical in laying down the architectural foundations for advanced analytics and AI/ML use cases across global business units. You'll work closely with the Data Science Lead, Product Manager, and other cross-functional stakeholders to ensure data systems are robust, secure, and future-ready.

Key Responsibilities
Architect and implement end-to-end data infrastructure, including ingestion, transformation, storage, and access layers, to support enterprise-scale analytics and machine learning.
Define and enforce data engineering standards, design patterns, and best practices across the CoE.
Lead the evaluation and selection of tools, frameworks, and platforms (cloud, open source, commercial) for scalable and secure data processing.
Work with data scientists to enable efficient feature extraction, experimentation, and model deployment pipelines (see the sketch below).
Design for real-time and batch processing architectures, including support for streaming data and event-driven workflows.
Own the data quality, lineage, and governance frameworks to ensure trust and traceability in data pipelines.
Collaborate with central IT, data platform teams, and business units to align on data strategy, infrastructure, and integration patterns.
Mentor and guide junior engineers as the team expands, creating a culture of high performance and engineering excellence.

Qualifications
10+ years of hands-on experience in data engineering, data architecture, or platform development.
Strong expertise in building distributed data pipelines using tools like Spark, Kafka, Airflow, or equivalent orchestration frameworks.
Deep understanding of data modeling, data lake/lakehouse architectures, and scalable data warehousing (e.g., Snowflake, BigQuery, Redshift).
Advanced proficiency in Python and SQL, with working knowledge of Java or Scala preferred.
Strong experience working on cloud-native data architectures (AWS, GCP, or Azure), including serverless, storage, and compute optimization.
Proven experience in architecting ML/AI-ready data environments, supporting MLOps pipelines and production-grade data flows.
Familiarity with DevOps practices, CI/CD for data, and infrastructure-as-code (e.g., Terraform) is a plus.
Excellent problem-solving skills and the ability to communicate technical solutions to non-technical stakeholders.
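As referenced in the responsibilities list, here is a minimal Scala Spark sketch of the feature-extraction enablement such a role supports: a batch job that turns raw events into per-user features for a downstream model. The lake paths and the (user_id, event_type, amount, event_ts) schema are assumptions for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object UserFeatureJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("user-features").getOrCreate()

    // Hypothetical cleansed event table: (user_id, event_type, amount, event_ts)
    val events = spark.read.parquet("s3://example-lake/silver/events/")

    // Per-user features for a downstream model: recency, frequency, monetary value
    val features = events
      .groupBy("user_id")
      .agg(
        max("event_ts").alias("last_seen"),
        count(lit(1)).alias("event_count"),
        sum("amount").alias("total_amount"),
        avg("amount").alias("avg_amount")
      )

    // Write a versioned snapshot the training pipeline can pin to
    features.write.mode("overwrite")
      .parquet("s3://example-lake/features/user_rfm/v1/")
    spark.stop()
  }
}
```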
Posted 1 week ago
4.0 years
3 - 6 Lacs
Hyderābād
On-site
CDP ETL & Database Engineer

The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, with an analytical mindset, and a background in relational modeling in a hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams. The candidate will join a team of developers across the US, India and Costa Rica.

Responsibilities:
ETL Development – The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML.
Implementations & Onboarding – Will work with the team to onboard new clients onto the ZMP/CDP+ platform. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows.
Incremental Change Requests – The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards their implementation and execution. This requires the engineer to have a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes.
Change Data Management – The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests will be presented and reviewed. Prior to introducing change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment.
Collaboration & Process Improvement – The engineer will be asked to participate in knowledge-share sessions where they will engage with peers and discuss solutions, best practices, and overall approach. The candidate will be able to look for opportunities to streamline processes with an eye towards building a repeatable model to reduce implementation duration.

Job Requirements:
The CDP ETL & Database Engineer will be well versed in the following areas:
Relational data modeling.
ETL and FTP concepts.
Advanced analytics using SQL functions.
Cloud technologies – AWS, Snowflake.
Able to decipher requirements, provide recommendations, and implement solutions within predefined timelines.
The ability to work independently, but at the same time, the individual will be called upon to contribute in a team setting. The engineer will be able to confidently communicate status, raise exceptions, and voice concerns to their direct manager.
Participate in internal client project status meetings with the Solution/Delivery management teams. When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements.
Ability to work in a fast-paced, agile environment; the individual will be able to work with a sense of urgency when escalated issues arise.
Strong communication and interpersonal skills, with the ability to multitask and prioritize workload based on client demand.
Familiarity with Jira for workflow management and time allocation.
Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives.

Required Skills:
ETL – ETL tools such as Talend (preferred, not required); DMExpress – nice to have; Informatica – nice to have.
Database – hands-on experience with the following database technologies: Snowflake (required; see the sketch below); MySQL/PostgreSQL – nice to have; familiarity with NoSQL DB methodologies (nice to have).
Programming Languages – can demonstrate knowledge of any of the following: PL/SQL; JavaScript (strong plus); Python (strong plus); Scala (nice to have).
AWS – knowledge of the following AWS services: S3, EMR (concepts), EC2 (concepts), Systems Manager / Parameter Store.
Understands JSON data structures and key-value pairs.
Working knowledge of code repositories such as Git and WinCVS, and workflow management tools such as Apache Airflow, Kafka, Automic/Appworx, and Jira.

Minimum Qualifications:
Bachelor's degree or equivalent.
4+ years' experience.
Excellent verbal and written communications skills.
Self-starter, highly motivated.
Analytical mindset.

Company Summary:
Zeta Global is a NYSE-listed, data-powered marketing technology company with a heritage of innovation and industry leadership. Founded in 2007 by entrepreneur David A. Steinberg and John Sculley, former CEO of Apple Inc and Pepsi-Cola, the Company combines the industry's 3rd largest proprietary data set (2.4B+ identities) with Artificial Intelligence to unlock consumer intent, personalize experiences and help our clients drive business growth. Our technology runs on the Zeta Marketing Platform, which powers 'end to end' marketing programs for some of the world's leading brands. With expertise encompassing all digital marketing channels – Email, Display, Social, Search and Mobile – Zeta orchestrates acquisition and engagement programs that deliver results that are scalable, repeatable and sustainable.

Zeta Global is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, gender, ancestry, color, religion, sex, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.

Zeta Global Recognized in Enterprise Marketing Software and Cross-Channel Campaign Management Reports by Independent Research Firm
https://www.forbes.com/sites/shelleykohan/2024/06/1G/amazon-partners-with-zeta-global-to-deliver-gen-ai-marketing-automation/
https://www.cnbc.com/video/2024/05/06/zeta-global-ceo-david-steinberg-talks-ai-in-focus-at-milken-conference.html
https://www.businesswire.com/news/home/20240G04622808/en/Zeta-Increases-3Q%E2%80%GG24-Guidance
https://www.prnewswire.com/news-releases/zeta-global-opens-ai-data-labs-in-san-francisco-and-nyc-300S45353.html
https://www.prnewswire.com/news-releases/zeta-global-recognized-in-enterprise-marketing-software-and-cross-channel-campaign-management-reports-by-independent-research-firm-300S38241.html
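Since the role pairs SQL analytics with Snowflake and AWS, here is a hedged Scala sketch of one common pattern: landing an S3 feed into Snowflake through the Spark-Snowflake connector. The connector's source name and sfOptions keys follow the connector's documented conventions, but treat them, along with every path, credential, and table name here, as illustrative assumptions rather than this platform's actual configuration.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object S3ToSnowflake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("s3-to-snowflake").getOrCreate()

    // Hypothetical JSON feed landed by an upstream FTP/ETL step
    val feed = spark.read.json("s3://example-bucket/inbound/customers/")

    // Connection options for the Spark-Snowflake connector; all values illustrative
    val sfOptions = Map(
      "sfURL"       -> "example.snowflakecomputing.com",
      "sfUser"      -> sys.env("SNOWFLAKE_USER"),
      "sfPassword"  -> sys.env("SNOWFLAKE_PASSWORD"),
      "sfDatabase"  -> "CDP",
      "sfSchema"    -> "STAGING",
      "sfWarehouse" -> "LOAD_WH"
    )

    feed.write
      .format("net.snowflake.spark.snowflake") // the connector's source name
      .options(sfOptions)
      .option("dbtable", "CUSTOMER_STAGE")
      .mode(SaveMode.Overwrite)
      .save()

    spark.stop()
  }
}
```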
Posted 1 week ago
8.0 years
3 - 7 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Design and develop scalable systems for processing unstructured data into actionable insights using Python, Flask, and Azure Cognitive Services.
Integrate Optical Character Recognition (OCR), Speech-to-Text, and NLP models into workflows to handle various file formats such as PDFs, images, audio files, and text documents.
Implement robust error-handling mechanisms, multithreaded architectures, and RESTful APIs to ensure seamless user experiences.
Utilize Azure OpenAI, Azure Speech SDK, and Azure Form Recognizer to create AI-powered solutions tailored to meet complex business requirements.
Collaborate with cross-functional teams to drive innovation and implement analytics workflows and ML models to enhance business processes and decision-making.
Ensure the accuracy, efficiency, and scalability of systems, focusing on healthcare claims processing, document digitization, and data extraction.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
8+ years of relevant experience in AI/ML engineering and cognitive automation.
Proven experience as an AI/ML Engineer, Software Engineer, Data Analyst, or a similar role in the tech industry.
Extensive experience with Azure Cognitive Services and other AI technologies.
SQL, Python, PySpark, and Scala experience.
Proficient in developing and deploying machine learning models and handling large data sets.
Proven solid programming skills in Python and familiarity with the Flask web framework.
Proven excellent problem-solving skills and the ability to work in a fast-paced environment.
Proven solid communication and collaboration skills, capable of working effectively with cross-functional teams.
Demonstrated ability to implement robust ETL or ELT workflows for structured and unstructured data ingestion, transformation, and storage.

Preferred Qualification:
Experience in healthcare industries.

Skills:
Python Programming and SQL
Data Analytics and Machine Learning
Classification and Unsupervised Learning
Regression and NLP
Cloud and DevOps Foundations
Data Visualization and Reporting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone – of every race, gender, sexuality, age, location and income – deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
Posted 1 week ago
8.0 years
30 - 38 Lacs
Gurgaon
Remote
Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes (see the sketch below).
Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others.
Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation.
Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
Data Engineering: 6 years (Required)
AWS Elastic MapReduce (EMR): 3 years (Required)
AWS: 4 years (Required)
Work Location: In person
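As a small worked example of the ETL responsibilities above, here is a hedged Scala Spark sketch of an extract-transform-load pass from S3 into Redshift over JDBC; the bucket, cluster endpoint, table, and column names are invented for the example.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions._

object S3ToRedshift {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("s3-to-redshift").getOrCreate()

    // Extract: hypothetical CSV drop in S3
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://example-bucket/raw/orders/")

    // Transform: basic cleansing and a derived column
    val cleaned = orders
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("order_ts")))

    // Load: plain JDBC write; fine for a sketch, though bulk loads
    // usually stage files to S3 and COPY them into Redshift instead
    cleaned.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/dev")
      .option("dbtable", "analytics.orders")
      .option("user", sys.env("REDSHIFT_USER"))
      .option("password", sys.env("REDSHIFT_PASSWORD"))
      .option("driver", "com.amazon.redshift.jdbc42.Driver")
      .mode(SaveMode.Append)
      .save()

    spark.stop()
  }
}
```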
Posted 1 week ago
15.0 years
2 - 5 Lacs
Chennai
Remote
ABOUT US
Cognizant is one of the world's leading professional services companies, transforming clients' business, operating, and technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build, and run more innovative and efficient businesses. Headquartered in the U.S., Cognizant, a member of the NASDAQ-100, is consistently listed among the most admired companies in the world. Learn how Cognizant helps clients lead with digital at www.cognizant.com.

LEADING AT COGNIZANT
This is a Leadership role at Cognizant. We believe how you lead is as important as what you deliver. Cognizant leaders at every level: drive our business strategy and inspire teams around our future; live the leadership behaviors, leading themselves, others and the business; uphold our Values, role modeling them in every action and decision; nurture our people and culture, creating a workplace where all can thrive. At Cognizant, leadership transcends titles and is embodied in actions and behaviors. We empower our leaders at every level to drive business strategy, inspire teams, uphold our values, and foster an inclusive culture. We invite you to see how you can contribute to our story.

ROLE SUMMARY:
Solutioning lead for Data Engineering – Azure and Databricks as the primary stack.

ROLE RESPONSIBILITIES:
Architecture and solutioning on Azure and Databricks data platforms, with expertise in architecture patterns – data warehouse, lakehouse, data fabric and data mesh.
Sizing, estimation and implementation planning for solutioning.
Solution prototyping, advisory, and orchestrating in-person/remote workshops.
Work with hyperscalers and platform vendors to understand and test platform roadmaps and develop joint solutions.
Own end-to-end solutions, working across various teams in Cognizant – Sales, Delivery and Global Solutioning.
Own key accounts as architecture advisory and establish deep client relationships.
Contribute to the practice by developing reusable assets and solutions.

JOB REQUIREMENTS
Bachelor's or Master's degree in computer science, engineering, information systems or a related field.
Minimum 15 years' experience as a Solution Architect designing and developing data architecture patterns.
Minimum 5 years' hands-on experience in building Databricks-based solutions.
Minimum 3 years' experience as a Solution Architect in a pre-sales team, driving the sales process from a technical solution standpoint.
Excellent verbal and written communication skills, with the ability to present complex Cloud Data Architecture solution concepts to technical and executive audiences (leveraging PPTs, demos and whiteboards).
Deep expertise in designing Azure and Databricks solutions.
Strong expertise in handling large and complex RFPs/RFIs and collaborating with multiple service lines and platform vendors in a fast-paced environment.
Strong relationship-building skills and the ability to provide technical advisory and guidance.
Technology architecture and implementation experience, with deep implementation experience with data solutions.
15–20 years of experience in Data Engineering, with 5+ years of Data Engineering experience on the cloud.
Technology pre-sales experience – architecture, effort sizing, estimation and solution defense.
Data architecture patterns – Data Warehouse, Data Lake, Data Mesh, Lakehouse, Data as a Product.
Develop or co-develop proofs of concept and prototypes with customer teams.
Excellent understanding of distributed computing fundamentals.
Experience working with one or more major cloud vendors.
Deep expertise in end-to-end pipeline (or ETL) development following best practices, including orchestration and optimization of data pipelines.
Strong understanding of the full CI/CD lifecycle.
Experience migrating large legacy estates (Hadoop, Teradata and the like) to cloud data platforms.
Expert-level proficiency in engineering and optimizing the various data ingestion patterns – batch, micro-batch, streaming and API.
Understand the imperatives of change data capture, with a point of view on tools and best practices.
Architect and solution Data Governance capability pillars supporting a modern data ecosystem.
Data services and various consumption archetypes, including semantic layers, BI tools and AI/ML.
Thought leadership designing self-service data engineering platforms and solutions.

Core Platform – Databricks
Ability to engage and offer differing points of view on a customer's architecture using the Databricks platform.
Strong understanding of the Lakehouse architecture.
Implementation expertise using Delta Lake.
Security design and implementation on Databricks.
Scala pipeline development in a multi-hop pipeline architecture (see the sketch below).
Architecture and implementation experience with Spark and Delta Lake performance tuning, including topics such as cluster sizing.

Preferred skills:
Gen-AI architecture patterns.
Data quality and data governance.
Cloud cost monitoring and optimization.
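To make the multi-hop (medallion) pattern named above concrete, here is a minimal Scala sketch of a bronze-to-silver Delta Lake flow; the mount points, payments schema, and cleansing rules are assumptions for illustration, not a client design.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MultiHopPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("multi-hop").getOrCreate()

    // Bronze: raw JSON persisted as-is, with load metadata for lineage
    spark.read.json("/mnt/landing/payments/")
      .withColumn("_ingested_at", current_timestamp())
      .write.format("delta").mode("append").save("/mnt/lake/bronze/payments")

    // Silver: cleansed, deduplicated, typed view built from bronze
    spark.read.format("delta").load("/mnt/lake/bronze/payments")
      .filter(col("payment_id").isNotNull)
      .dropDuplicates("payment_id")
      .withColumn("amount", col("amount").cast("decimal(18,2)"))
      .write.format("delta").mode("overwrite").save("/mnt/lake/silver/payments")

    spark.stop()
  }
}
```

Each hop narrows the data's contract: bronze preserves everything for replay, while silver is the first layer downstream consumers can trust.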
Posted 1 week ago
8.0 years
6 - 9 Lacs
Chennai
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
Data, Analytics & Insights Technology (DAIT) provides customer, client, and operational data in support of Consumer, Business, Wealth, and Payments Technology, with responsibility for a number of key data technologies. These include 16 Authorized Data Sources (ADS), marketing and insights platforms, advanced analytics platforms, core client data and more. DAIT drives these capabilities with the goal of maximizing data assets to serve bank operations, meet regulatory requirements and personalize interactions with our customers across all channels. GBDART, a sub-function of DAIT, is the Bank's strategic initiative to modernize data architecture and enable cloud-based, connected data experiences for analytics and insights across commercial banking.

Job Description*
The candidate must have strong Java/J2EE experience with microservices development in cloud and on-prem environments, and experience working on large-scale enterprise applications using Java, web services, and streaming/real-time processing, across all phases of the SDLC for large and complex systems. Good experience creating and deploying services using Scala and Java with CI/CD implementation. Good to have experience with graph databases and data modeling routines using RDF.

Responsibilities*
Understand the business requirements and perform gap analysis.
Apply strong experience with microservices using Java and containers.
Provide technical solutions and develop SOAP/REST web services using Java and Spring.
Debug code and manage logs effectively.
Prepare unit test cases and run them through JUnit.
Deploy and manage code through CI/CD pipelines.
Refactor existing code to enhance readability, performance, and general structure.
Apply strong SQL database experience and write complex SQL queries.
Provide assistance to the testing team where necessary to aid in testing and test case creation.
Provide guidance to team developers with design, implementation, and completion.
Follow the agile methodology.
Work with the onsite team to determine needs and apply/customize existing technology to meet those requirements.
Maintain existing software systems by identifying and correcting software defects.
Maintain and support multiple projects and deadlines.
Document and report application specifics.
Create technical specifications and test plans.
Provide weekend on-call support during application releases.

Requirements*

Education*
Certifications, if any: NA.

Experience Range*
08 years to 15 years.

Foundational Skills*
8-15 years of Java development experience.
Must have experience driving projects technically.
Strong experience with Java and Spring; web services - SOAP/REST.
Good to have experience with graph databases and RDF.
Experience with application servers such as WebLogic; JMS, EJB.
JDBC / SQL programming.
SBT, Autosys, Bitbucket, Jenkins.
Must be detail-oriented and a quick learner.
Strong communication skills, both verbal and written.
Able to work independently as well as with teams in a proactive manner.

Desired Skills*
Adaptability to quickly learn and deliver on internal frameworks.
Ability to work on multiple projects and be flexible to adapt to changing requirements.
Willingness to embrace and learn new technologies.
Must be an effective communicator.

Work Timings*
General shift (11:00 AM to 8:00 PM).

Job Location*
Chennai, GIFT.
Posted 1 week ago
7.0 years
5 - 6 Lacs
Chennai
On-site
7+ years of experience in Big Data with strong expertise in Spark and Scala.
Mandatory Skills: Big Data – primarily Spark and Scala; strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys.
Good to Have: Agile methodology and banking expertise; strong communication skills.
Not limited to Spark batch – Spark Streaming experience is needed (see the sketch below).
NoSQL DB experience: HBase/Mongo/Couchbase.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
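Because the posting stresses Spark Streaming beyond batch, here is a minimal Structured Streaming sketch in Scala using a file source, where each file landing in an HDFS directory becomes a micro-batch; the paths and transaction schema are invented for the example, and this pairs naturally with the HDFS/Hive stack listed above.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object FileStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("file-stream").getOrCreate()

    // Streaming file sources require an explicit schema
    val schema = new StructType()
      .add("txn_id", StringType)
      .add("account", StringType)
      .add("amount", DoubleType)

    // Each new file landing in the HDFS directory becomes a micro-batch
    val txns = spark.readStream
      .schema(schema)
      .json("hdfs:///data/incoming/txns/")

    val perAccount = txns.groupBy("account").sum("amount")

    // Complete mode re-emits the full aggregate every trigger; console sink for demo
    val query = perAccount.writeStream
      .outputMode("complete")
      .format("console")
      .option("checkpointLocation", "hdfs:///chk/file-stream")
      .start()

    query.awaitTermination()
  }
}
```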
Posted 1 week ago
0 years
2 - 3 Lacs
Chennai
On-site
Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch and real-time data streaming and processing.

Responsibilities:
> Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis.
> Develop and maintain Kafka-based data pipelines: this includes designing Kafka Streams, setting up Kafka clusters, and ensuring efficient data flow.
> Create and optimize Spark applications using Scala and PySpark: leverage these languages to process large datasets and implement data transformations and aggregations.
> Integrate Kafka with Spark for real-time processing: build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming (see the sketch below).
> Collaborate with data teams: this includes data engineers, data scientists, and DevOps, to design and implement data solutions.
> Tune and optimize Spark and Kafka clusters: ensure high performance, scalability, and efficiency of data processing workflows.
> Write clean, functional, and optimized code: adhere to coding standards and best practices.
> Troubleshoot and resolve issues: identify and address any problems related to Kafka and Spark applications.
> Maintain documentation: create and maintain documentation for Kafka configurations, Spark jobs, and other processes.
> Stay updated on technology trends: continuously learn and apply new advancements in functional programming, big data, and related technologies.

Proficiency in:
Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala).
Spark (Scala, Python) for data processing and analysis.
Kafka for real-time data ingestion and processing.
ETL processes and data ingestion tools.
Deep hands-on expertise in PySpark, Scala, and Kafka.

Programming Languages: Scala, Python, or Java for developing Spark applications; SQL for data querying and analysis.

Other Skills:
Data warehousing concepts.
Linux/Unix operating systems.
Problem-solving and analytical skills.
Version control systems.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Most Relevant Skills
Please see the requirements listed above.

Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
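As referenced in the responsibilities above, here is a minimal Scala sketch of Kafka-to-Spark integration with Structured Streaming, assuming the spark-sql-kafka connector is on the classpath; the broker address, topic, and event schema are illustrative.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object KafkaToSpark {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-to-spark").getOrCreate()

    // Hypothetical event schema carried in the Kafka message value
    val schema = new StructType()
      .add("user_id", StringType)
      .add("action", StringType)
      .add("ts", TimestampType)

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "clickstream")
      .option("startingOffsets", "latest")
      .load()

    // Kafka delivers bytes; decode the value column and apply the schema
    val events = raw
      .select(from_json(col("value").cast("string"), schema).alias("e"))
      .select("e.*")

    // Windowed count per action, tolerating 10 minutes of late data
    val counts = events
      .withWatermark("ts", "10 minutes")
      .groupBy(window(col("ts"), "5 minutes"), col("action"))
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "/tmp/chk/kafka-to-spark")
      .start()
      .awaitTermination()
  }
}
```

The watermark bounds how long late events are waited for, which is the knob that trades completeness against latency in windowed aggregations.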
Posted 1 week ago
0 years
6 - 17 Lacs
India
Remote
A Big Data/Hadoop Developer designs, develops, and maintains Hadoop-based solutions. This role involves working with the Hadoop ecosystem to build data processing pipelines, write MapReduce jobs (see the sketch below), and integrate Hadoop with other systems. They collaborate with data scientists and analysts to gather requirements and deliver insights, ensuring efficient data ingestion, transformation, and storage.

Key Responsibilities:
Design and Development: Creating, implementing, and maintaining Hadoop applications, including designing data processing pipelines, writing MapReduce jobs, and developing efficient data ingestion and transformation processes.
Hadoop Ecosystem Expertise: Having a strong understanding of the Hadoop ecosystem, including components like HDFS, MapReduce, Hive, Pig, HBase, and tools like Flume, Zookeeper, and Oozie.
Data Analysis and Insights: Analyzing large datasets stored in Hadoop to uncover valuable insights and generate reports.
Collaboration: Working closely with data scientists, analysts, and other stakeholders to understand requirements and deliver effective solutions.
Performance Optimization: Optimizing the performance of Hadoop jobs and data processing pipelines, ensuring efficient resource utilization.
Data Security and Privacy: Maintaining data security and privacy within the Hadoop environment.
Documentation and Best Practices: Creating and maintaining documentation for Hadoop development, including best practices and standards.

Skills and Qualifications:
Strong Programming Skills: Proficient in languages like Java, Python, or Scala, and experience with MapReduce programming.
Hadoop Framework Knowledge: Deep understanding of the Hadoop ecosystem and its core components.
Data Processing Tools: Experience with tools like Hive, Pig, HBase, Spark, and Kafka.
Data Modeling and Analysis: Familiarity with data modeling techniques and experience in analyzing large datasets.
Problem-Solving and Analytical Skills: Ability to troubleshoot issues, optimize performance, and derive insights from data.
Communication and Collaboration: Effective communication skills to collaborate with diverse teams and stakeholders.
Linux Proficiency: Familiarity with Linux operating systems and basic command-line operations.

Tamil candidates only.
Job Type: Full-time
Pay: ₹633,061.90 - ₹1,718,086.36 per year
Benefits: Food provided, Work from home
Work Location: In person
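The MapReduce-style processing this role describes is commonly expressed today through Spark's RDD API. Below is a hedged Scala sketch of the classic word-count job, with the map and reduce phases called out in comments; the HDFS paths are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("word-count").getOrCreate()
    val sc = spark.sparkContext

    // Map phase: split lines into (word, 1) pairs
    val pairs = sc.textFile("hdfs:///data/books/*.txt")
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word.toLowerCase, 1))

    // Reduce phase: shuffle by key and sum the counts, as a reducer would
    val counts = pairs.reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///data/out/word-counts")
    spark.stop()
  }
}
```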
Posted 1 week ago
7.0 years
0 Lacs
Chennai
On-site
7+ years of experience in Big Data with strong expertise in Spark and Scala.
Mandatory Skills: Big Data – primarily Spark and Scala; strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys.
Good to Have: Agile methodology and banking expertise; strong communication skills.
Not limited to Spark batch – Spark Streaming experience is needed.
NoSQL DB experience: HBase/Mongo/Couchbase.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Andhra Pradesh
On-site
Job description / Responsibilities:
- 5-7 years of experience in Big Data stacks: Spark, Scala, Hive, Impala, Hadoop
- Strong expertise in Scala: good hands-on experience with the Scala programming language, and the ability to model a given problem statement using object-oriented programming concepts
- Basic understanding of the Spark in-memory processing framework and the concept of map tasks and reduce tasks (a sketch follows this posting)
- Hands-on experience on data processing projects
- Ability to frame SQL queries and analyze data based on the given requirements
- Advanced SQL knowledge
- GitHub or Bitbucket

Primary Skill: Spark and Scala; the resource should have good hands-on experience in the Scala programming language.
Secondary Skills: SQL, Python, Hive, Impala, AWS

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
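As an illustration of map/reduce tasks and of framing SQL over data in Spark (shown in PySpark for brevity; the posting's primary language is Scala, and the APIs are parallel). The sample data is invented.

```python
# Sketch of map tasks / reduce tasks and SQL framing in Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("map-reduce-sql").getOrCreate()
sc = spark.sparkContext

# Map tasks emit (key, value) pairs; reduceByKey aggregates per key.
orders = sc.parallelize([("books", 120.0), ("games", 80.0), ("books", 45.5)])
totals = orders.reduceByKey(lambda a, b: a + b)
print(totals.collect())  # [('books', 165.5), ('games', 80.0)]

# The same analysis framed as SQL over a temporary view.
df = spark.createDataFrame(orders, ["category", "amount"])
df.createOrReplaceTempView("orders")
spark.sql(
    "SELECT category, SUM(amount) AS total FROM orders GROUP BY category"
).show()
```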
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At Deliveroo, it is our mission to build the definitive food company. In order to do that, we’re building a company that is secure and protects the data and money of our customers, employees and investors.

As a Senior Software Engineer within the Security Product Engineering team, you will be responsible for the design, development and support of security products. To give you a flavour of the types of products we are working on, this includes custom-developed just-in-time access tooling (PAM) — an illustrative sketch follows this posting — authentication solutions that support millions of active monthly users, as well as a number of other exciting products we have in the pipeline. Working with colleagues across the wider security team and business, you will have the opportunity to develop a roadmap of products in alignment with achieving our maturity goals.

Reporting to the Head of Security Architecture & Engineering, this is a hands-on software engineering role. You will directly influence the security posture of many projects across the company to ensure that security plays an important part in everything we do.

Role Location: Hyderabad

What You'll Do
- Provide software engineering expertise to the team in our golden path languages, and provide peer review and support to more junior engineers within the team.
- Work with cloud platforms (e.g., AWS, Azure, GCP) and with CI/CD pipelines, containerisation (Docker, Kubernetes), and infrastructure-as-code concepts.
- Design, develop, and implement scalable, reliable, and maintainable software systems, features, and APIs.
- Contribute significantly to architectural decisions and technical strategy.
- Demonstrate a product management mindset, always prioritising the maximum value add in terms of security risk reduction.
- Proactively identify and advocate for improvements in development processes, tooling, infrastructure, and operational efficiency.

Requirements
- Bachelor's degree in computer science or equivalent practical experience.
- 5+ years of experience building and deploying cloud workloads.
- 8+ years of experience programming with Java, Golang or Scala, building tools and refactoring code.
- Broad knowledge of the security technologies and capabilities used in an enterprise in a high-growth, cloud-based environment.
- Experience with AWS/GCP and containerised environments, and with ensuring that security architecture and engineering align to that model.
- Knowledge of at least one security domain (such as identity and access management, application security, data security, cloud security).
- Hands-on and technical background with an understanding of enterprise systems, the security threats they face, and how to remediate them.

Preferred, But Not Required
- Experience in AWS IAM
- Experience with SAML and SSO
- JavaScript/TypeScript experience
- Docker
- Knowledge of security compliance standards and regulations, including GDPR/data protection

Why Deliveroo?
Our mission is to transform the way you shop and eat, bringing the neighbourhood to your door by connecting consumers, restaurants, shops and riders. We are transforming the way the world eats and shops by making access to food and products more convenient and enjoyable. We are a technology-driven company at the forefront of the most rapidly expanding industry in the world. We are still a small team, making a very large impact, looking to answer some of the most interesting questions out there. We move fast, value autonomy, and are always looking for new ideas.

Workplace & Diversity
At Deliveroo we know that people are the heart of the business and we prioritise their welfare. We offer multiple great benefits in areas including health, family, finance, community, convenience, growth, time away and relocation. We believe a great workplace is one that represents the world we live in and how beautifully diverse it can be. That means we have no judgement when it comes to any one of the things that make you who you are - your gender, race, sexuality, religion or a secret aversion to coriander. All you need is a passion for (most) food and a desire to be part of one of the fastest growing startups in an incredibly exciting space.
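To make the "just-in-time access" idea concrete, a toy sketch of issuing a short-lived, scoped credential that expires on its own. PyJWT, the claim names, and the 15-minute window are illustrative assumptions, not Deliveroo's actual tooling.

```python
# Illustrative JIT-access sketch: grant a scoped token with a built-in expiry.
from datetime import datetime, timedelta, timezone
import jwt  # pip install PyJWT

SECRET = "replace-with-a-managed-secret"  # would come from a secrets manager

def grant_jit_access(user: str, role: str, minutes: int = 15) -> str:
    now = datetime.now(timezone.utc)
    claims = {"sub": user, "role": role, "iat": now,
              "exp": now + timedelta(minutes=minutes)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError once the grant lapses.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = grant_jit_access("alice", "db-admin")
print(verify(token)["role"])  # "db-admin" until the grant expires
```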
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Department: Engineering
Employment Type: Full Time
Location: India

Description
Shape the Future of Work with Eptura. At Eptura, we're not just another tech company—we're a global leader transforming the way people, workplaces, and assets connect. Our innovative worktech solutions empower 25 million users across 115 countries to thrive in a digitally connected world. Trusted by 45% of Fortune 500 companies, we're redefining workplace innovation and driving success for organizations around the globe.

Job Description
We are seeking a Data Lead – Data Engineering to spearhead the design, development, and optimization of complex data pipelines and ETL processes. This role requires deep expertise in data modeling, cloud platforms, and automation to ensure high-quality, scalable solutions. You will collaborate closely with stakeholders, engineers, and business teams to drive data-driven decision-making across our organization.

Responsibilities
- Work with stakeholders to understand data requirements and architect end-to-end ETL solutions.
- Design and maintain data models, including schema design and optimization.
- Develop and automate data pipelines to ensure quality, consistency, and efficiency.
- Lead the architecture and delivery of key modules within data platforms.
- Build and refine complex data models in Power BI, simplifying data structures with dimensions and hierarchies (see the sketch after this posting).
- Write clean, scalable code using Python, Scala, and PySpark (must-have skills).
- Test, deploy, and continuously optimize applications and systems.
- Lead, mentor, and develop a high-performing data engineering team, fostering a culture of collaboration, innovation, and continuous improvement while ensuring alignment with business objectives.
- Mentor team members and participate in engineering hackathons to drive innovation.

About You
- 7+ years of experience in Data Engineering, with at least 2 years in a leadership role.
- Strong expertise in Python, PySpark, and SQL for data processing and transformation.
- Hands-on experience with Azure cloud computing, including Azure Data Factory and Databricks.
- Proficiency in analytics/visualization tools: Power BI, Looker, Tableau, IBM Cognos.
- Strong understanding of data modeling, including dimension and hierarchy structures.
- Experience working with Agile methodologies and DevOps practices (GitLab, GitHub).
- Excellent communication and problem-solving skills in cross-functional environments.
- Ability to reduce added cost, complexity, and security risks with scalable analytics solutions.

Nice To Have
- Experience working with NoSQL databases (Cosmos DB, MongoDB).
- Familiarity with AutoCAD and building systems for advanced data visualization.
- Knowledge of identity and security protocols, such as SAML, SCIM, and FedRAMP compliance.

Benefits
- Health insurance fully paid for spouse, children, and parents
- Accident insurance fully paid
- Flexible working allowance
- 25 days holidays
- 7 paid sick days
- 10 public holidays
- Employee Assistance Program

Eptura Information
Follow us on Twitter | LinkedIn | Facebook | YouTube
Eptura is an Equal Opportunity Employer. At Eptura we promote our flexible workspace environment, free from discrimination. We believe that diversity of experience, perspective, and background leads to a better environment for all our people and a better product for our customers. Everyone is welcome at Eptura, no matter where you are from, and the more diverse we are, the more unified we will be in ensuring respectful connections all around the world.
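As a small illustration of the "dimensions and hierarchies" work mentioned above, a hedged PySpark sketch that derives a year > quarter > month hierarchy from a date column, the kind of dimension that simplifies a Power BI model. Column names and values are made up.

```python
# Derive a simple date hierarchy for BI consumption.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, year, quarter, month, to_date

spark = SparkSession.builder.appName("date-dimension").getOrCreate()

facts = spark.createDataFrame(
    [("2025-01-15", 100.0), ("2025-07-02", 250.0)],
    ["order_date", "amount"])

dim = (facts
       .withColumn("order_date", to_date(col("order_date")))
       .withColumn("year", year("order_date"))       # top of the hierarchy
       .withColumn("quarter", quarter("order_date"))
       .withColumn("month", month("order_date")))    # leaf level
dim.show()
```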
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Title: Director - AI Engineering Lead
Location: Chennai

We are one purpose-led global organisation: the enablers and innovators, ensuring that we can fulfil our mission to push the boundaries of science and discover and develop life-changing medicines. We take pride in working close to the cause, opening the locks to save lives, ultimately making a massive difference to the outside world.

AstraZeneca (AZ) is in a period of strong growth and our employees have a united purpose to make a difference to patients around the world who need both our medicines and the ongoing developments from our science. In this journey AZ must continue to work across borders and with partners and new colleagues in a fast and seamless way. The ambition, size and complexity of the organisation, coupled with the opportunities afforded by new technology, has led the Board to approve a large-scale transformation programme – Axial.

The Axial Programme will be powered by S/4HANA, a new ERP (Enterprise Resource Planning) system which will be implemented right across the organisation and will provide our business with standardised processes, enhanced financial management, common data and real-time reporting, transforming the way we work through our entire supply chain - from bench to patient. The new system will be used by more than 20,000 employees daily, is foundational to all AZ entities and is central to most core business processes. This is a once-in-a-generation programme for AstraZeneca and will shape our ways of working globally for many years to come.

The Axial programme needs the best talent to work in it. Whether it’s the technical skills, business understanding or change leadership, we want to ensure we have the strongest team deployed throughout. We are aiming to deliver a world-class change programme that leaves all employees with a fuller understanding of their role in the end-to-end nature of our global company. This programme will provide AZ with a competitive edge, to the benefit of our employees, customers and patients.

The AI Engineering Lead will be responsible for leading the development and implementation of AI and machine learning solutions that drive business value. This role requires a strategic thinker with a strong technical background in AI technologies and data analytics, and a proven ability to manage cross-functional teams. The ideal candidate will have a strong background in AI, machine learning, and natural language processing, coupled with hands-on experience in Python programming.

What You’ll Do
- Lead a focused team to explore and harness Generative AI capabilities.
- Design and execute Proof of Concepts (PoCs) to validate AI use cases.
- Develop and implement initial use cases that leverage Generative AI and Machine Learning technologies.
- Establish and oversee governance frameworks for AI applications to ensure compliance and ethical use.
- Collaborate with senior leadership to define the AI operating model and strategic direction.
- Foster a culture of ideation and innovation within the Axial team, encouraging exploration of new technologies and methodologies.
- Architect and implement generative AI models tailored to produce structured outputs such as text or images.
- Research and apply advanced machine learning algorithms to improve model efficiency and accuracy.
- Create user-friendly applications using Streamlit (see the sketch after this posting).
- Handle large datasets, perform data cleaning, and apply feature engineering to prepare data for model training.
- Work collaboratively with cross-functional teams to understand requirements and translate them into technical solutions.
- Optimize and fine-tune GPT models to improve output quality and efficiency.
- Conduct research and stay up to date with the latest advancements in AI, machine learning, and NLP to integrate new methodologies into existing processes.
- Debug and resolve issues in AI models and the associated infrastructure.
- Document technical specifications and processes to support ongoing development and enhancements.
- Support the deployment and integration of AI solutions into existing systems.

Essential For The Role
- At least 5 years’ experience demonstrating technical skills in one or more of the following areas: Generative AI, machine learning, recommendation systems, pattern recognition, natural language processing, or computer vision.
- Proficiency in using Streamlit to create interactive and visually appealing web applications.
- Extensive experience in software development, including the full SDLC.
- Experience developing and integrating applications using APIs.
- Solid understanding of natural language processing (NLP) techniques and applications.
- Experience using computer vision techniques.
- Experience with machine learning frameworks and libraries (TensorFlow, PyTorch, etc.).
- Strong software development skills, including Python and Scala.
- Experience with automation strategies (CI/CD etc.) and containerisation (Kubernetes, Docker) is key.
- Excellent problem-solving skills and attention to detail.
- Excellent written and verbal communication, business analysis, and consultancy skills.

Desirable For The Role
- Prior experience in the Azure AI stack.
- Knowledge of large language models like GPT, Claude, Gemini.
- Experience with version control systems like Git.
- Knowledge of agents and multi-agent systems, including agent-based modelling and simulation.
- Familiarity with decision-making algorithms and systems that exhibit goal-directed behaviour.
- Strong foundation in linear algebra, calculus, probability, and statistics, with the ability to apply mathematical concepts to optimize models and interpret results.
- Experience with multi-cloud platforms (AWS, Azure, GCP) is an advantage, with Azure as the primary experience.
- Proficiency working in high-performance computing or cloud environments.
- Experience in translating requirements into fit-for-purpose models, processing designs, data mappings and data profile reports.
- Experience working in relevant domains, e.g. Pharma, R&D, Manufacturing, Commercial, Finance, HR, Legal, Facilities.

Why AstraZeneca?
At AstraZeneca we’re dedicated to being a Great Place to Work, where you are empowered to push the boundaries of science and unleash your entrepreneurial spirit. There’s no better place to make a difference to medicine, patients and society: an inclusive culture that champions diversity and collaboration, always committed to lifelong learning, growth and development. We’re on an exciting journey to pioneer the future of healthcare.

So, what’s next? Are you already imagining yourself joining our team? Good, because we can’t wait to hear from you. Are you ready to bring new ideas and fresh thinking to the table? Brilliant! We have one seat available and hope it’s yours. If you’re curious to know more, we welcome your application no later than the closing date below.

Where can I find out more?
Our Social Media:
Follow AstraZeneca on LinkedIn https://www.linkedin.com/company/1603/
Follow AstraZeneca on Facebook https://www.facebook.com/astrazenecacareers/
Follow AstraZeneca on Instagram https://www.instagram.com/astrazeneca_careers/?hl=en

Date Posted: 22-Jul-2025
Closing Date: 03-Aug-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
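For a concrete flavour of the Streamlit requirement, a minimal sketch of a GenAI front end: a prompt box that calls a chat-completion API. The OpenAI client and model name are illustrative stand-ins, not AstraZeneca's stack.

```python
# Minimal Streamlit + LLM sketch. Save as app.py and run:
#   streamlit run app.py   (with OPENAI_API_KEY set in the environment)
import streamlit as st
from openai import OpenAI  # pip install openai

st.title("GenAI prototype")
prompt = st.text_area("Ask a question")

if st.button("Generate") and prompt:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    st.write(resp.choices[0].message.content)
```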
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
In the modern banking age, financial institutions are required to bring Classical Data Drivers and Evolving Business Drivers together on a single platform. However, traditional data platforms face limitations in communicating with evolving business drivers due to technological constraints. A Modern Data Platform is essential to bridge this gap and elevate businesses to the next level through data-driven approaches, enabled by recent technology transformations.

As a Technology leader with an academic background in Computer Science / Information Technology / Data Technologies [BE/BTech/MCA], you will have the opportunity to lead the Modern Data Platform Practice. This role involves providing solutions to customers on Traditional Datawarehouses across On-Prem and Cloud platforms. You will be responsible for architecting Data Platforms, defining Data engineering designs, selecting appropriate technologies and tools, and enhancing the organization's Modern Data Platform capabilities. Additionally, you will lead pre-sales discussions, provide technology architecture in RFP responses, and spearhead technology POC/MVP initiatives.

To excel in this role, you are expected to possess the following qualifications and experiences:
- 12-16 years of Data Engineering and analytics experience, including hands-on experience in Big Data systems across On-Prem and Cloud environments
- Leadership in Data Platform architecture and design projects for mid-to-large-size firms
- Implementation experience with Batch Data and Streaming / Online data integrations using 3rd-party tools and custom programs
- Proficiency in SQL and one of the programming languages: Core Java / Scala / Python
- Hands-on experience in Kafka for enabling event-driven data pipes / processing (a sketch follows this posting)
- Knowledge of leading Data Services offered by AWS, Azure, Snowflake, Confluent
- Strong understanding of distributed computing and related data structures
- Implementation of Data Governance and Quality capabilities for Data Platforms
- Analytical and presentation skills, along with the ability to build and lead teams
- Exposure to leading RDBMS technologies and Data Visualization platforms
- Demonstrated experience with AI/ML models for data processing and generating insights
- Team player with the ability to work independently with minimal direction

This role at Oracle is at Career Level IC4, and the company values Diversity and Inclusion to foster innovation and excellence. Oracle offers a competitive suite of Employee Benefits emphasizing parity, consistency, and affordability, including Medical, Life Insurance, and Retirement Planning. The company encourages employees to contribute to the communities where they live and work.

Oracle believes that innovation stems from diversity and inclusion, and is committed to creating a workforce where all individuals can thrive and contribute their best work. The company supports individuals with disabilities by providing reasonable accommodations throughout the job application, interview process, and in potential roles to ensure successful participation in crucial job functions.

As a global leader in cloud solutions, Oracle is dedicated to leveraging tomorrow's technology to address today's challenges. The company values inclusivity and empowers its workforce to drive innovation and growth. Oracle careers offer opportunities for global engagement, work-life balance, and competitive benefits. The company is committed to promoting an inclusive workforce that supports opportunities for all individuals.
If you require accessibility assistance or accommodation for a disability at any point during the employment process at Oracle, kindly reach out by emailing accommodation-request_mb@oracle.com or calling +1 888 404 2494 in the United States.
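As a hedged illustration of the "event-driven data pipes" item above, a minimal Kafka produce/consume sketch using kafka-python; the broker address and topic name are placeholders, not part of the posting.

```python
# Sketch of an event-driven data pipe on Kafka: publish an order event,
# then consume it for downstream processing.
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"))
producer.send("orders", {"order_id": 42, "amount": 99.5})
producer.flush()  # make sure the event is actually written

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")))
for message in consumer:
    print(message.value)  # downstream processing would go here
    break
```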
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Join us as a Big Data Engineer at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences.

To be successful as a Big Data Engineer, you should have experience with:
- Full Stack Software Development for large-scale, mission-critical applications.
- Mastery of distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, Airflow.
- Expertise in Scala, Java, Python, J2EE technologies, Microservices, Spring, Hibernate, REST APIs.
- Experience with n-tier web application development and frameworks like Spring Boot, Spring MVC, JPA, Hibernate.
- Proficiency with version control systems, preferably Git; GitHub Copilot experience is a plus.
- Proficiency in API development using SOAP or REST, JSON, and XML.
- Experience developing back-end applications with multi-process and multi-threaded architectures.
- Hands-on experience building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes.
- Experience with DevOps practices like CI/CD, test automation, and build automation using tools like Jenkins, Maven, Chef, Git, Docker.
- Experience with data processing in cloud environments like Azure or AWS.
- Data Product development experience (essential).
- Experience with Agile development methodologies like SCRUM.
- A result-oriented approach with strong analytical and problem-solving skills.
- Excellent verbal and written communication and presentation skills.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. This role is for the Pune location.

Purpose of the role:
To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations:
- Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
- Requires in-depth technical knowledge and experience in the assigned area of expertise, with a thorough understanding of the underlying principles and concepts within that area.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- For an individual contributor, develop technical expertise in the work area, acting as an advisor where appropriate.
- Will have an impact on the work of related teams within the area; partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities; escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision-making within the own area of expertise.
- Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge, and Drive – the operating manual for how we behave.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
kolkata, west bengal
On-site
You are a Data Engineer with 3+ years of experience, proficient in SQL and Python development. You will be responsible for designing, developing, and maintaining scalable data pipelines to support ETL processes using tools like Apache Airflow, AWS Glue, or similar (a minimal Airflow sketch follows this posting). Your role involves optimizing and managing relational and NoSQL databases such as MySQL, PostgreSQL, MongoDB, or Cassandra for high performance and scalability. You will write advanced SQL queries, stored procedures, and functions to efficiently extract, transform, and analyze large datasets. Additionally, you will implement and manage data solutions on cloud platforms like AWS, Azure, or Google Cloud, utilizing services such as Redshift, BigQuery, or Snowflake. Your contributions to designing and maintaining data warehouses and data lakes will support analytics and BI requirements. Automating data processing tasks by developing scripts and applications in Python or other programming languages is also part of your responsibilities.

As a Data Engineer, you will implement data quality checks, monitoring, and governance policies to ensure data accuracy, consistency, and security. Collaboration with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions is essential. Identifying and resolving performance bottlenecks in data systems and optimizing data storage and retrieval are key aspects. Maintaining comprehensive documentation for data processes, pipelines, and infrastructure is crucial, as is staying up to date with the latest trends in data engineering, big data technologies, and cloud services.

You should hold a Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field. Proficiency in SQL, relational databases, NoSQL databases, and Python programming, along with experience with data pipeline tools and cloud platforms, is required. Knowledge of big data tools like Apache Spark, Hadoop, or Kafka is a plus. Strong analytical and problem-solving skills with a focus on performance optimization and scalability are essential, and excellent verbal and written communication skills are necessary to convey technical concepts to non-technical stakeholders. You should be able to work collaboratively in cross-functional teams. Preferred certifications include AWS Certified Data Analytics, Google Professional Data Engineer, or similar. An eagerness to learn new technologies and adapt quickly in a fast-paced environment will be valuable in this role.
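A minimal Apache Airflow sketch of the ETL orchestration described above; the DAG id, schedule, and task bodies are illustrative placeholders.

```python
# Three-step extract -> transform -> load DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source")   # placeholder for a real extract

def transform():
    print("clean and reshape")           # placeholder for transformations

def load():
    print("write to warehouse")          # placeholder for the load step

# `schedule` is the Airflow 2.4+ parameter name; older versions use
# `schedule_interval` instead.
with DAG(dag_id="daily_etl", start_date=datetime(2025, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run the tasks in sequence
```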
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS, we honor the visible and invisible elements of our identities, personal experiences, and belief systems - the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What You'll Do
- Collaborate with client-facing teams to understand solution context and contribute to technical requirement gathering and analysis.
- Design and implement technical features leveraging best practices for the technology stack being used.
- Work with technical architects on the team to validate the design and implementation approach.
- Write production-ready code that is easily testable, understood by other developers, and accounts for edge cases and errors.
- Ensure the highest quality of deliverables by following architecture/design guidelines and coding best practices, and by participating in periodic design/code reviews.
- Write unit tests as well as higher-level tests to handle expected edge cases and errors gracefully, as well as happy paths.
- Use bug tracking, code review, version control, and other tools to organize and deliver work.
- Participate in scrum calls and agile ceremonies, and effectively communicate work progress, issues, and dependencies.
- Consistently contribute to researching and evaluating the latest technologies through rapid learning, conducting proof-of-concepts, and creating prototype solutions.

What You'll Bring
- Experience: 2+ years of relevant hands-on experience. A CS foundation is a must.
- Strong command of distributed computing frameworks like Spark (preferred) or others.
- Strong analytical/problem-solving skills.
- Ability to quickly learn and become hands-on with new technology, and to be innovative in creating solutions.
- Strength in at least one programming language - Python or Java, Scala, etc. - and programming basics (data structures).
- Hands-on experience in building modules for data management solutions such as data pipelines, orchestration, and ingestion patterns (batch, real-time).
- Experience in designing and implementing solutions on distributed computing and cloud services platforms such as (but not limited to) AWS, Azure, GCP.
- Good understanding of RDBMS; some experience with ETL is preferred.

Additional Skills
- Understanding of DevOps, CI/CD, and data security; experience in designing on a cloud platform.
- AWS Solutions Architect certification with an understanding of the broader AWS stack.
- Knowledge of data modeling and data warehouse concepts.
- Willingness to travel to other global offices as needed to work with clients or other internal project teams.

Perks & Benefits
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel
Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering Applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find out more at www.zs.com
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are seeking a skilled and passionate Data Engineer to join our team and drive the development of scalable data pipelines for Generative AI (GenAI) and Large Language Model (LLM)-powered applications. This role demands hands-on expertise in Spark, GCP, and data integration with modern AI APIs.

What You'll Do
- Design and develop high-throughput, scalable data pipelines for GenAI and LLM-based solutions.
- Build robust ETL/ELT processes using Spark (PySpark/Scala) on Google Cloud Platform (GCP).
- Integrate enterprise and unstructured data with LLM APIs such as OpenAI, Gemini, and Hugging Face (see the sketch after this posting).
- Process and enrich large volumes of unstructured data, including text and document embeddings.
- Manage real-time and batch workflows using Airflow, Dataflow, and BigQuery.
- Implement and maintain best practices for data quality, observability, lineage, and API-first designs.

What Sets You Apart
- 3+ years of experience building scalable Spark-based pipelines (PySpark or Scala).
- Strong hands-on experience with GCP services: BigQuery, Dataproc, Pub/Sub, Cloud Functions.
- Familiarity with LLM APIs, vector databases (e.g., Pinecone, FAISS), and GenAI use cases.
- Expertise in text processing, unstructured data handling, and performance optimization.
- An agile mindset and the ability to thrive in a fast-paced startup or dynamic environment.

Nice To Have
- Experience working with embeddings and semantic search.
- Exposure to MLOps or data observability tools.
- Background in deploying production-grade AI/ML workflows.
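As a hedged sketch of the embedding-and-vector-index step referenced above: embed a few documents with an LLM API, index them in FAISS, and run a semantic search. The OpenAI client and model name are assumptions for illustration; any embedding model would do.

```python
# Embed documents, build an exact L2 index, and search it semantically.
import numpy as np
import faiss  # pip install faiss-cpu
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
docs = ["invoice overdue", "shipment delayed", "password reset request"]

def embed(texts):
    resp = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder model name
        input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

vectors = embed(docs)
index = faiss.IndexFlatL2(vectors.shape[1])  # exact (brute-force) L2 search
index.add(vectors)

query = embed(["late delivery"])
distances, ids = index.search(query, 2)  # top-2 nearest documents
print([docs[i] for i in ids[0]])         # expect "shipment delayed" first
```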
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
Join our dynamic Workforce Planning (WFP) team within the Consumer and Community (CCB) Operations division and be part of a forward-thinking organization that leverages data science to optimize workforce efficiency. Contribute to innovative projects that drive impactful solutions for Chase's operations.

As an Operations Research Analyst within the WFP Data Science team, you will tackle complex and high-impact projects. You will assess unstructured problems, develop practical strategies, and propose solutions to enhance decision-making processes. Collaborate with management to prioritize, time, and deliver next-gen operations research solutions.

Your responsibilities will include:
- Designing and developing optimization models and simulation models (a toy optimization sketch follows this posting).
- Supporting OR projects either individually or as part of a project team.
- Collaborating with stakeholders to understand business requirements and define solution objectives clearly.
- Selecting appropriate methods to solve problems while staying updated on the latest OR methodologies.
- Ensuring the robustness of mathematical solutions.
- Developing and communicating recommendations and OR solutions in an easy-to-understand manner, using data to tell a story.
- Leading and persuading others to positively influence team efforts.
- Helping frame a business problem into a technical problem to achieve feasible solutions.

To be considered for this role, you should possess:
- A Master's degree with 4+ years, or a Doctorate (PhD) with 2+ years, of experience in Operations Research, Industrial Engineering, Systems Engineering, Financial Engineering, Management Science, or related disciplines.
- Experience supporting OR projects with multiple team members.
- Hands-on experience in developing simulation models, optimization models, and/or heuristics.
- A deep understanding of the math and theory behind Operations Research techniques.
- Proficiency in open-source (OSS) programming languages like Python, R, or Scala.
- Experience with commercial solvers like GUROBI, CPLEX, XPRESS, or MOSEK.
- Familiarity with basic data table operations (SQL, Hive, etc.).
- Demonstrated relationship-building skills and a superior ability to make things happen through positive influence.

Preferred qualifications include advanced expertise with Operations Research techniques, prior experience in building Reinforcement Learning models, extensive knowledge of Stochastic Modelling, and previous experience leading highly complex cross-functional technical projects with multiple stakeholders.
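To make the optimization-modelling work concrete, a toy staffing LP using the open-source PuLP modeller (a commercial solver like GUROBI or CPLEX would accept the same formulation); shift names, demand, and costs are invented for illustration.

```python
# Minimize wage cost while covering per-shift staffing demand.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

shifts = ["morning", "evening", "night"]
demand = {"morning": 12, "evening": 9, "night": 5}   # heads required
cost = {"morning": 100, "evening": 110, "night": 130}  # cost per head

# Integer decision variables: how many people to staff on each shift.
staff = {s: LpVariable(f"staff_{s}", lowBound=0, cat="Integer")
         for s in shifts}

prob = LpProblem("staffing", LpMinimize)
prob += lpSum(cost[s] * staff[s] for s in shifts)  # objective: total cost
for s in shifts:
    prob += staff[s] >= demand[s]                  # constraint: cover demand

prob.solve()  # uses the CBC solver bundled with PuLP by default
print({s: int(value(staff[s])) for s in shifts})
```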
Posted 1 week ago