10.0 years
8 - 10 Lacs
Hyderābād
On-site
Full-time Employee Status: Regular Role Type: Hybrid Department: Information Technology & Systems Schedule: Full Time

Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
Responsibilities:
Team Leadership & Delivery: Manage, coach, and grow engineering teams responsible for backend services, data streaming, and API integrations within our fintech platform. Ensure successful delivery of end-to-end scalable, reliable, and secure systems built with Java and AWS cloud services, along with front-end technologies (native mobile and web). Collaborate with engineers to review architecture, set technical direction, and evolve infrastructure based on business priorities. Provide oversight and support to engineers building microservices, reactive systems, and real-time data pipelines. Ensure technical deliverables align with non-functional requirements such as performance, compliance, privacy, and operational SLAs.
Culture & Quality: Cultivate an environment of engineering ownership, psychological safety, and continuous improvement. Drive adoption of modern software engineering practices including test automation, CI/CD, API-first design, and secure coding. Lead by example in performing code reviews, advocating for clean architecture, and encouraging TDD and documentation discipline. Foster strong collaboration with Product, Design, Security, and DevOps to deliver robust, user-centric financial applications.
Talent Development & Technical Guidance: Guide engineers in mastering tools like Gradle, JDK compatibility, and dependency optimization. Mentor team members on best practices in cloud-native development, reactive programming, and scalable system design. Identify and grow future leaders while maintaining a high bar for performance and inclusivity. Lead team hiring, onboarding, and career development efforts tailored to emerging engineering talent in the region.
Operational Excellence: Monitor engineering metrics (velocity, quality, uptime, defect rate) and ensure adherence to agile workflows. Partner with engineering leadership to drive platform strategy and improve the developer experience. Stay current on regulatory and compliance considerations including GDPR, PCI, ISO 27001, and others relevant to fintech.

Qualifications
10+ years of software development experience, with 3+ years managing engineering teams in fast-paced, agile environments. Strong technical background in Java, Spring Boot, AWS, and distributed systems architecture. Hands-on understanding of technologies like GraphQL, Kafka, DynamoDB, Lambda, Kinesis, and microservices.
Proven track record of shipping high-impact products in regulated or high-availability domains such as fintech or e-commerce. Ability to lead teams working across real-time data systems, cloud infrastructure, and API ecosystems. Strong communication and interpersonal skills with a collaborative leadership style.

Preferred Experience: Built and scaled engineering teams supporting financial services or other highly regulated platforms. Familiarity with mass-market consumer apps and retail-scale backend systems. Deep knowledge of compliance standards (PCI, HIPAA, CCPA, etc.) and secure development practices. Experience working with distributed teams across time zones and geographies.

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Cortex is urgently hiring for the role of "Data Engineer".
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (Hybrid; 2 days per week in office required)
Notice Period: Immediate to 10 days only
Key skills: Candidates must have experience in Python, Kafka streaming, PySpark, and Azure Databricks.

Role Overview
We are looking for a highly skilled Data Engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities
Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks. Architect scalable data streaming and processing solutions to support healthcare data workflows. Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data. Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.). Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions. Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows. Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering. Stay updated on the latest cloud technologies, big data frameworks, and industry trends.

If you are interested, kindly send us your resume by clicking "Easy Apply". This job is posted by Aishwarya K, Business HR - Day Recruitment, Cortex Consultants LLC (US) | Cortex Consulting Pvt Ltd (India) | Tcell (Canada) | US | India | Canada
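For context on the kind of pipeline this posting describes, here is a minimal, illustrative sketch (not part of the original listing) of a real-time ingestion job: PySpark Structured Streaming on Azure Databricks reading a Kafka topic and appending the parsed events to a Delta table. The broker address, topic name, event schema, and storage paths are hypothetical placeholders.

```python
# Minimal sketch: read a Kafka topic with Spark Structured Streaming on Databricks
# and land the parsed events in a Delta table. Broker, topic, schema, and paths
# are hypothetical placeholders, not details from the posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("healthcare-events-stream").getOrCreate()

# Assumed event schema, for illustration only
event_schema = StructType([
    StructField("patient_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "healthcare.events")            # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; cast to string and parse the JSON body
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/healthcare_events")  # hypothetical path
    .outputMode("append")
    .start("/mnt/delta/healthcare_events")                               # hypothetical path
)
```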
Posted 1 week ago
4.0 years
0 Lacs
Telangana
On-site
We are looking for an experienced and motivated Senior Data Engineer to join our dynamic team. The role primarily focuses on MDM and the associated ETL and real-time feeds monitoring and support. This engineer will be part of the global L1/L2 production support team, which is split between Chubb Engineering Centers in India and Mexico. Key responsibilities will include monitoring ETL processes, handling automated issues, and ensuring compliance with security policies. A good understanding of Informatica MDM and Data Factory is preferred. The ideal candidate will have experience with PowerCenter, MDM, and Azure Data Factory, and will be able to identify and resolve data quality issues, performance bottlenecks, and other ETL-related problems while proactively monitoring production systems.

Responsibilities:
Monitor ETL jobs, including PowerCenter/IICS, Kafka-based near real-time updates, and batch processes. Troubleshoot production incidents. Understand data mapping and data modeling methodologies, including normal form, star, and snowflake, to reduce data redundancy and improve data integrity. Maintain knowledge of current and emerging developments/trends for assigned area(s) of responsibility, assess the impact, and collaborate with the Scrum Team and Leadership.

Qualifications:
4-year/bachelor's degree or equivalent work experience (4 years of experience in lieu of a bachelor's). At least 5 years of experience with, and a strong understanding of, ETL development concepts and tools (e.g., PowerCenter and/or IICS, Informatica MDM, Azure Data Factory, Snowflake). Experience with Data Warehousing and Business Intelligence concepts and technologies. Knowledge of SQL and advanced programming languages such as Python and Java. Demonstrated critical thinking skills and the ability to identify and resolve data quality issues, performance bottlenecks, and other ETL-related problems. Experience with Agile methodologies and project-management skills. Excellent communication and interpersonal skills. 2+ years of experience in scheduling jobs using Autosys (or a comparable distributed scheduler). 3+ years of experience writing Unix/Linux or Windows scripts in tools such as Perl, shell script, Python, etc. 3+ years of experience in creating complex technical specifications from business requirements/specifications.
Posted 1 week ago
1.0 years
7 - 10 Lacs
Hyderābād
On-site
Summary: As a global leader in providing intelligent information to the world’s most influential people, we depend on innovation to deliver unmatched quality and speed to our customers. We are seeking talented software engineers who possess a deep understanding of web development technologies as well as proven experience designing scalable systems for enterprise applications. This position will provide you with great opportunities to expand your technical skillset while working within a highly collaborative team environment. You will have the opportunity to work with cutting-edge technology and be part of a rapidly growing organization. The successful candidate will play a key role in shaping our future success.

About the role: Design, implement, test, deploy, maintain, and enhance features for the web application. Collaborate with other developers and cross-functional teams (e.g., product management/ownership, user support) to ensure timely delivery of new functionality. Participate in code reviews and sprint ceremonies. Identify dependencies by communicating with other dev, data, infrastructure, and product management teams. Actively participate in the continuous integration, continuous deployment pipeline for all stages of development. Maintain accurate documentation throughout the development cycle. Work closely with other developers to ensure code maintainability, readability, and reusability. Ensure high code quality by writing unit and integration tests, and performing code reviews.

About you: Bachelor's degree or equivalent in Computer Science or related field; Master's degree preferred. Around 1 year of experience developing back-end services and APIs with: Core Java, Spring Boot, JPA/Hibernate, PostgreSQL, Kafka, Elasticsearch, Redis, OpenShift, ArgoCD; RESTful web services, JSON, XML; WebSockets, WebRTC; unit testing, mocking, CI/CD pipelines; Agile methodology. Familiarity with front-end languages like JavaScript, HTML5, and CSS. Strong communication skills, both written and verbal. Ability to understand business requirements and translate them into technical solutions. Detail-oriented mindset with strong analytical abilities. Self-motivated individual who can thrive in a fast-paced environment. Experience working in a Scrum environment preferred. #LI-KP1

What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 1 week ago
0 years
3 - 6 Lacs
Hyderābād
On-site
We're hiring skilled backend developers to build and scale LiveDesign—our enterprise collaboration platform built on an event-driven microservices architecture with real-time stream processing. This platform enables the execution and analysis of quantum simulations, machine learning models, and other computational methods. It's used in diverse industries from drug researchers seeking to cure disease to materials designers in the fields of organic electronics, polymer science, and other areas. WHAT YOU’LL DO DAY-TO-DAY: Design, build, and test high-performance, distributed components in problem areas, including but not limited to data aggregation/transformation/reporting and large-scale computations for a collaborative multi-user application Architect and implement scalable, maintainable solutions using technologies like Kafka and Kubernetes. Contribute to a culture of clean code and continuous learning through regular code reviews. Collaborate closely within a cross-functional, agile team composed of product designers, developers, and testers to deliver features and functionality that meet business and product goals. WHO WE’RE LOOKING FOR: The ideal candidate should have: Bachelor's/master's degree in computer science or equivalent stream with three to six years of experience in enterprise application development. Practical understanding of CS concepts in the areas of data structures and algorithms, database management systems, operating systems, and computer networks. Excellent programming skills, logical reasoning abilities, and enthusiasm for solving interesting problems, along with a willingness to learn. Experience with event-driven microservices architecture and Kubernetes based deployments. Enthusiasm for solving interesting problems and a willingness to learn new technologies. Proficient interpersonal skills (oral/verbal communication), complemented by an ability to collaborate in a team environment. As an equal opportunity employer, Schrödinger hires outstanding individuals into every position in the company. People who work with us have a high degree of engagement, a commitment to working effectively in teams, and a passion for the company's mission. We place the highest value on creating a safe environment where our employees can grow and contribute, and refuse to discriminate on the basis of race, color, religious belief, sex, age, disability, national origin, alienage or citizenship status, marital status, partnership status, caregiver status, sexual and reproductive health decisions, gender identity or expression, or sexual orientation. To us, "diversity" isn't just a buzzword, but an important element of our core principles and key business practices. We believe that diverse companies innovate better and think more creatively than homogenous ones because they take into account a wide range of viewpoints. For us, greater diversity doesn't mean better headlines or public images - it means increased adaptability and profitability.
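As a small, hedged illustration of the event-driven pattern this posting mentions (and not code from the platform itself), the sketch below consumes events from a Kafka topic in Python using the kafka-python client and keeps a simple per-project tally. The topic name, broker address, and message fields are assumptions made for the example.

```python
# Illustrative sketch only: consume result events from a Kafka topic and
# aggregate a simple per-project count. Topic, broker, and fields are hypothetical.
import json
from collections import Counter

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "simulation.results",                      # hypothetical topic
    bootstrap_servers="localhost:9092",        # hypothetical broker
    group_id="result-aggregator",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

counts = Counter()
for message in consumer:
    event = message.value                      # e.g. {"project": "abc", "status": "done"}
    counts[event.get("project", "unknown")] += 1
    print(f"processed events per project: {dict(counts)}")
```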
Posted 1 week ago
2.0 - 4.0 years
6 - 9 Lacs
Hyderābād
On-site
Summary: As a Data Analyst, you will be responsible for designing, developing, and maintaining efficient and scalable data pipelines for data ingestion, transformation, and storage.

About the Role
Location – Hyderabad (#LI-Hybrid)

Key Responsibilities: Design, develop, and maintain efficient and scalable data pipelines for data ingestion, transformation, and storage. Collaborate with cross-functional teams, including data analysts, business analysts, and BI, to understand data requirements and design appropriate solutions. Build and maintain data infrastructure in the cloud, ensuring high availability, scalability, and security. Write clean, efficient, and reusable code in scripting languages, such as Python or Scala, to automate data workflows and ETL processes. Implement real-time and batch data processing solutions using streaming technologies like Apache Kafka, Apache Flink, or Apache Spark. Perform data quality checks and ensure data integrity across different data sources and systems. Optimize data pipelines for performance and efficiency, identifying and resolving bottlenecks and performance issues. Collaborate with DevOps teams to deploy, automate, and maintain data platforms and tools. Stay up to date with industry trends, best practices, and emerging technologies in data engineering, scripting, streaming data, and cloud technologies.

Essential Requirements: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field with overall experience of 2-4 years. Proven experience as a Data Engineer or similar role, with a focus on scripting, streaming data pipelines, and cloud technologies like AWS, GCP, or Azure. Strong programming and scripting skills in languages like Python, Scala, or SQL. Experience with cloud-based data technologies, such as AWS, Azure, or Google Cloud Platform. Hands-on experience with streaming technologies, such as StreamSets, Apache Kafka, Apache Flink, or Apache Spark Streaming. Strong experience with Snowflake (required). Proficiency in working with big data frameworks and tools, such as Hadoop, Hive, or HBase. Knowledge of SQL and experience with relational and NoSQL databases. Familiarity with data modelling and schema design principles. Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment. Excellent communication and teamwork skills.

Commitment to Diversity and Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.

Accessibility and accommodation: Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to diversityandincl.india@novartis.com and let us know the nature of your request and your contact information. Please include the job requisition number in your message.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives.
Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards Division US Business Unit Universal Hierarchy Node Location India Site Hyderabad (Office) Company / Legal Entity IN10 (FCRS = IN010) Novartis Healthcare Private Limited Functional Area Marketing Job Type Full time Employment Type Regular Shift Work No
Posted 1 week ago
3.0 years
6 - 7 Lacs
Hyderābād
On-site
Job Title: Data Engineer
Total Experience: 3+ Years
Location: Hyderabad
Job Type: Contract
Work Mode: On-site
Notice Period: Immediate to 15 Days
Work Timings: Monday to Friday, 10 am to 7 pm (IST)

Interview Process
Level 1: HR Screening (Personality Assessment)
Level 2: Technical Round
Level 3: Final Round
(Note: The interview levels may vary)

Company Overview
Compileinfy Technology Solutions Pvt. Ltd. is a fast-growing IT services and consulting company delivering tailored digital solutions across industries. At Compileinfy, we promote a culture of ownership, critical thinking, and technological excellence.

Job Summary
We are seeking a highly motivated Data Engineer to join our expanding Data & AI team. This role offers the opportunity to design and develop robust, scalable data pipelines and infrastructure, ensuring the delivery of high-quality, timely, and accessible data throughout the organization. As a Data Engineer, you will collaborate across teams to build and optimize data solutions that support analytics, reporting, and business operations. The ideal candidate combines deep technical expertise, strong communication, and a drive for continuous improvement.

Who You Are:
Experienced in designing and building data pipelines for ingestion, transformation, and loading (ETL/ELT) of data from diverse sources to data warehouses or lakes. Proficient in SQL and at least one programming language, such as Python, Java, or Scala. Skilled at working with both relational databases (e.g., PostgreSQL, MySQL) and big data platforms (e.g., Hadoop, Spark, Hive, EMR). Competent in cloud environments (AWS, GCP, Azure), data lake, and data warehouse solutions. Comfortable optimizing and managing the quality, reliability, and timeliness of data flows. Able to translate business requirements into technical specifications and collaborate effectively with stakeholders, including data scientists, analysts, and engineers. Detail-oriented, with strong documentation skills and a commitment to data governance, security, and compliance. Proactive, agile, and adaptable to a fast-paced environment with evolving business needs.

What You Will Do:
Design, build, and manage scalable ETL/ELT pipelines to ingest, transform, and deliver data efficiently from diverse sources to centralized repositories such as lakes or warehouses. Implement validation, monitoring, and cleansing procedures to ensure data consistency, integrity, and adherence to organizational standards. Develop and maintain efficient database architectures, optimize data storage, and streamline data integration flows for business intelligence and analytics. Work closely with data scientists, analysts, and business users to gather requirements and deliver tailored data solutions supporting business objectives. Document data models, dictionaries, pipeline architectures, and data flows to ensure transparency and knowledge sharing. Implement and enforce data security and privacy measures, ensuring compliance with regulatory requirements and best practices. Monitor, troubleshoot, and resolve issues in data pipelines and infrastructure to maintain high availability and performance.

Preferred Qualifications:
Bachelor’s or higher degree in Computer Science, Information Technology, Engineering, or a related field. 3-4 years of experience in data engineering, ETL development, or related areas. Strong SQL and data modeling expertise with hands-on experience in data warehousing or business intelligence projects. Familiarity with AWS data integration tools (e.g., Glue, Athena), messaging/streaming platforms (e.g., Kafka, AWS MSK), and big data tools (Spark, Databricks). Proficiency with version control, testing, and deployment tools for maintaining code and ensuring best practices. Experience in managing data security, quality, and operational support in a production environment.

What You Deliver
Comprehensive data delivery documentation (data dictionary, mapping documents, models). Optimized, reliable data pipelines and infrastructure supporting the organization’s analytics and reporting needs. Operations support and timely resolution of data-related issues aligned with service level agreements.

Interdependencies / Internal Engagement
Actively engage with cross-functional teams to align on requirements, resolve issues, and drive improvements in data delivery, architecture, and business impact. Become a trusted partner in fostering a data-centric culture and ensuring the long-term scalability and integrity of our data ecosystem.

Why Join Us?
At Compileinfy, we value innovation, collaboration, and professional growth. You'll have the opportunity to work on exciting, high-impact projects and be part of a team that embraces cutting-edge technologies. We provide continuous learning and career advancement opportunities in a dynamic, inclusive environment.

Perks and Benefits
Competitive salary and benefits package. Flexible work environment. Opportunities for professional development and training. A supportive and collaborative team culture.

Application Process
Submit your resume with the subject line: “Data Engineer Application – [Your Name]” to recruitmentdesk@compileinfy.com

Job Types: Full-time, Contractual / Temporary
Contract length: 12 months
Pay: ₹600,000.00 - ₹700,000.00 per year
Benefits: Health insurance, Provident Fund
Work Location: In person
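To illustrate the kind of validation and cleansing step the responsibilities above describe, here is a hedged PySpark sketch that runs two basic data-quality checks on a batch before loading it; the input path, key column, and output location are hypothetical and not taken from the posting.

```python
# Minimal data-quality gate: reject a batch if the key column contains nulls or duplicates.
# Input path, column name, and output location are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.read.parquet("s3://example-bucket/staging/orders/")  # hypothetical path

total = df.count()
null_keys = df.filter(col("order_id").isNull()).count()
duplicate_keys = total - df.dropDuplicates(["order_id"]).count()

if null_keys > 0 or duplicate_keys > 0:
    raise ValueError(
        f"Data-quality check failed: {null_keys} null keys, {duplicate_keys} duplicates"
    )

# Checks passed: append the validated batch to the curated zone.
df.write.mode("append").parquet("s3://example-bucket/curated/orders/")  # hypothetical path
```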
Posted 1 week ago
0 years
7 - 9 Lacs
Hyderābād
On-site
India Information Technology (IT) Group Functions Job Reference # 322748BR City Hyderabad, Pune Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How we hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. 
Disclaimer / Policy statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description We are looking for a skilled Software Engineer with 3+ years of hands-on experience in backend and frontend technologies. The role involves building scalable applications using Node.js/NestJS, Python, TypeScript, MongoDB, and front-end frameworks like React and Angular. A solid grasp of computer science fundamentals, system design, and experience with Kafka, GitHub/GitLab, CI/CD pipelines, and working in Agile teams is expected. You’ll be part of a cross-functional team, contributing to the development of scalable, high-quality applications in an Agile environment, and collaborating with a US-based team when needed. Responsibilities Develop and maintain scalable backend services using Node.js/NestJS, Python, and MongoDB. Build user interfaces using React and Angular. Design efficient APIs and implement complex business logic. Work with Kafka and event-driven microservices. Manage version control with GitHub/GitLab and understanding of CI/CD pipelines or Github Actions. Follow best practices in code quality, testing, and deployment. Collaborate with cross-functional teams and participate in Agile processes via Jira. Align with US-based teams on collaboration and delivery schedules. Qualifications Bachelor’s degree in Computer Science, Engineering, or related field. Minimum 3 years of professional development experience. Proficient in Node.js, NestJS, Python, TypeScript, and MongoDB. Experience with React, Angular, and RESTful API development. Solid understanding of data structures, algorithms, and design patterns. Experience with Kafka, event-driven systems, GitHub/GitLab, CI/CD pipelines and cloud platforms (AWS/GCP/Azure). Familiar with Agile methodology and tools like Jira. Strong communication skills and ability to work with globally distributed teams.
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
India Information Technology (IT) Group Functions Job Reference # 322747BR City Hyderabad Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How we hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. 
Disclaimer / Policy statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
India Information Technology (IT) Group Functions Job Reference # 322746BR City Hyderabad, Pune Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Data Engineers and DevSecOps Engineers to join our team in building the Enterprise Data Mesh at UBS. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Mesh components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Mesh services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing many services as part of our Data Mesh strategy firmwide to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How we hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy statements UBS is an Equal Opportunity Employer. 
We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 1 week ago
12.0 years
1 - 8 Lacs
Cochin
On-site
Job Information
We are looking for a highly skilled and experienced .NET Architect to lead the design, development, and deployment of enterprise-grade applications using Microsoft technologies. The ideal candidate will have deep expertise in .NET architecture, cloud computing, microservices, and secure API development. You will collaborate with cross-functional teams to drive innovation, scalability, and performance.

Your Responsibilities
Design end-to-end architecture for scalable and maintainable enterprise applications using .NET (Core/Framework). Provide technical leadership and guidance to development teams, ensuring adherence to best practices. Define architectural standards, design patterns, and governance processes. Lead solution design using Microservices, Clean Architecture, and Domain-Driven Design (DDD). Review code and architecture, ensuring quality, performance, and security compliance. Architect and deploy applications on Azure (App Services, Functions, API Gateway, Key Vault, etc.). Collaborate with product owners, business analysts, and stakeholders to convert business needs into technical solutions. Implement DevOps pipelines for continuous integration and deployment (CI/CD) using Azure DevOps or GitHub Actions. Oversee security architecture including authentication (OAuth 2.0, OpenID Connect) and data protection. Develop proof-of-concepts (PoCs) and technical prototypes to validate solution approaches.

Required Skills
12+ years of experience in software development using Microsoft technologies. 3+ years in an architectural or senior design role. Proficiency in C#, ASP.NET Core, Web API, Entity Framework, LINQ. Strong experience in Microservices architecture and distributed systems. Expertise in Azure services (App Services, Azure Functions, Blob Storage, Key Vault, etc.). Hands-on with CI/CD, DevOps, Docker, Kubernetes. Deep understanding of SOLID principles, design patterns, and architectural best practices. Experience in secure coding practices and API security (JWT, OAuth2, IdentityServer). Strong background in relational and NoSQL databases (SQL Server, Cosmos DB, MongoDB). Excellent communication, leadership, and documentation skills.

Preferred Qualifications
Microsoft Certified: Azure Solutions Architect Expert or equivalent certification. Experience with frontend frameworks (React, Angular, Blazor) is a plus. Knowledge of event-driven architecture and message queues (e.g., Kafka, RabbitMQ). Exposure to Infrastructure as Code (Terraform, ARM, Bicep). Experience working in Agile/Scrum environments.

Experience: 12+ Years
Work Location: Kochi
Work Type: Full Time
Please send your resume to careers@cabotsolutions.com
Posted 1 week ago
5.0 years
0 Lacs
Cochin
On-site
Job Position: Magento Developer
Location: Kochi/Bangalore
Experience: 5-15 years
Mandate: Magento 1 and Magento 2 experience along with migration; familiarity with RESTful APIs and GraphQL; experience in headless architecture; strong knowledge of Magento indexing and caching; experience in customization using a 3rd-party search module.

Job Description:
Overall 3+ years of experience working on Magento / Adobe Commerce Cloud. Prefer someone with over 5 years of experience in various capacities in the Retail Domain. Deep knowledge of Magento 2+, preferably with a full-stack mindset. Should have a good understanding of all sub-systems in eCommerce including User Management, Catalog / Product / Browse / Search, Promotions & Pricing, Payments, Cart & Checkout, Tax, Address validations, Checkout, Place Order, Backend jobs and processes etc. Prefer someone working on a composable paradigm with knowledge of disparate components for CMS (AEM, Contentful etc), Search (Constructor, Bloomreach etc), Loyalty, PWA for experience layer, International Shipping etc. Able to build custom reusable modules from scratch. Deep understanding of Magento 2 architecture and best practices. Should be familiar with RESTful APIs and GraphQL. Capable of extending GraphQL schemas for custom modules. Strong knowledge of Magento indexing and caching. Proven experience in writing and managing backend batch jobs, data syncs and cron-based processes. Create and optimize custom scheduled jobs and asynchronous background processes (e.g., order sync, catalog imports). Solid MySQL and database schema design experience, including indexing and optimization. Optimize database queries, indexing strategies, and backend performance across Magento and related services. Proficient in developing and consuming REST/SOAP APIs. Recommended to have experience with message queues (RabbitMQ, Kafka, or similar). Third-party Service Integration – Prefer someone with experience in integration aspects including ERPs, CRMs, OMS, Payment Gateways etc. Experience in working with multi-website/multi-store/store-views/brands with support for multi-language & multi-currency. Proficient in PHP and MySQL. Exposure to headless architecture or PWA Studio is an advantage. Good grasp of Agile/Scrum methodologies and tools like Jira. Collaborate with cross-functional teams including UI/UX designers, product managers, and QA to ensure quality and timely delivery. Optimize site performance and scalability; perform code reviews and ensure coding standards. Troubleshoot and resolve complex technical issues in a timely manner. Adobe certification (Professional / Expert) is recommended. Experience in test-driven development (TDD), integration testing, and end-to-end testing using JUnit, Mockito, RestAssured, etc. Experience with Continuous Integration/Delivery models such as Azure DevOps, including Git, CI/CD pipelines and IaC.

Good to Have Skills:
Demonstrable understanding of infrastructure and application security management, in the context of developing and operating large-scale multi-tenant systems. Broad knowledge of contemporary technologies and frameworks blended with experience of working with relevant ones (RESTful web services, database).

Job Type: Full-time
Pay: ₹269,271.01 - ₹2,590,380.65 per year
Work Location: In person
Posted 1 week ago
4.0 years
7 - 9 Lacs
Gurgaon
On-site
As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We’re a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact. How You Will Contribute: As a Senior Software Developer within the Blue Planet team, you will play a key role in designing, developing, testing, and supporting scalable software solutions tailored for carrier-class networks and cloud environments. This role requires a strong technical foundation, attention to detail, and a collaborative mindset to deliver high-quality, modular code that is built to scale and last. You will: Work closely with cross-functional teams to design and develop high-performing software modules and features. Write and maintain backend and frontend code with strong emphasis on quality, performance, and maintainability. Support system design, documentation, and end-to-end development including unit testing and debugging. Participate in global agile development teams to deliver against project priorities and milestones. Contribute to the development of telecom inventory management solutions integrated with cloud platforms and advanced network technologies. The Must Haves: Bachelor's or Master’s degree in Computer Science, Engineering, or a related technical field. 4+ years of software development experience. Backend: Java 11+, Spring (Security, Data, MVC), SpringBoot, J2EE, Maven, JUnit. Frontend: TypeScript, JavaScript, Angular 2+, HTML, CSS, SVG, Protractor, Jasmine. Databases: Neo4j (Graph DB), PostgreSQL, TimescaleDB. Experience with SSO implementations (LDAP, SAML, OAuth2). Proficiency with Docker, Kubernetes, and cloud platforms (preferably AWS). Strong understanding of algorithms, data structures, and software design patterns. Assets: Experience with ElasticSearch, Camunda/BPMN, Drools, Kafka integration. Knowledge of RESTful APIs using Spring MVC. Knowledge in Inventory Management Systems (e.g., Cramer, Granite, Metasolv). Familiarity with tools like Node.js, Gulp, and build/test automation. Exposure to telecom/networking technologies such as DWDM/OTN, SONET, MPLS, GPON, FTTH. Understanding of OSS domains and exposure to telecom network/service topology and device modeling. Prior experience working in a global, agile development environment. #LI-FA Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox. At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
Posted 1 week ago
2.0 years
4 - 10 Lacs
Gurgaon
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Are you a technologist who is passionate about building robust, scalable, and performant applications and data products? This is exactly what we do, so join the Data Engineering & Tooling Team! The Data Engineering & Tooling Team (part of Enterprise Data Products at Expedia) is responsible for making traveler, partner, and supply data accessible, unlocking insights and value! Our mission is to build and manage the travel industry's premier Data Products and SDKs.

Software Development Engineer II
Introduction to team: Our team is looking for a Software Engineer who applies engineering principles to build and improve existing systems. We follow Agile principles, and we're proud to offer a dynamic, diverse and collaborative environment where you can play an impactful role and build your career. Would you like to be part of a global tech company that does travel? Don't wait, apply now!

In this role, you will:
Implement products and solutions that are highly scalable with high-quality, clean, maintainable, optimized, modular and well-documented code across the technology stack. Craft APIs, and develop and test applications and services to ensure they meet design requirements. Work collaboratively with all members of the technical staff and other partners to build and ship outstanding software in a fast-paced environment. Apply knowledge of software design principles and Agile methodologies and tools. Resolve problems and roadblocks as they occur with help from peers or managers. Follow through on details and drive issues to closure. Assist with supporting production systems (investigate issues and work towards resolution).

Experience and qualifications:
Bachelor's degree or Master's in Computer Science & Engineering, or a related technical field; or equivalent related professional experience. 2+ years of software development or data engineering experience in an enterprise-level engineering environment. Proficient with Object Oriented Programming concepts with a strong understanding of Data Structures, Algorithms, Data Engineering (at scale), and Computer Science fundamentals. Experience with Java, Scala, Spring framework, micro-service architecture, and orchestration of containerized applications, along with a good grasp of OO design and strong design patterns knowledge. Solid understanding of different API types (e.g. REST, GraphQL, gRPC), access patterns and integration. Prior knowledge and experience of NoSQL databases (e.g. ElasticSearch, ScyllaDB, MongoDB). Prior knowledge and experience of big data platforms, batch processing (e.g. Spark, Hive), stream processing (e.g.
Kafka, Flink) and cloud-computing platforms such as Amazon Web Services. Knowledge & Understanding of monitoring tools, testing (performance, functional), application debugging & tuning. Good communication skills in written and verbal form with the ability to present information in a clear and concise manner. Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
Posted 1 week ago
0 years
0 Lacs
India
Remote
Company Description At Trigonal AI, we specialize in building and managing end-to-end data ecosystems that empower businesses to make data-driven decisions with confidence. From data ingestion to advanced analytics, we offer the expertise and technology to transform data into actionable insights. Our core services include data pipeline orchestration, real-time analytics, and business intelligence & visualization. We use modern technologies such as Apache Airflow, Kubernetes, Apache Druid, Kafka, and leading BI tools to create reliable and scalable solutions. Let us help you unlock the full potential of your data. Role Description This is a full-time remote role for a Business Development Specialist. The specialist will focus on day-to-day tasks including lead generation, market research, customer service, and communication with potential clients. The role also includes analytical tasks and collaborating with the sales and marketing teams to develop and implement growth strategies. Qualifications Strong Analytical Skills for data-driven decision-making Effective Communication skills for engaging with clients and team members Experience in Lead Generation and Market Research Proficiency in Customer Service to maintain client relationships Proactive and independent work style Experience in the tech or data industry is a plus Bachelor's degree in Business, Marketing, or related field
Posted 1 week ago
7.0 years
21 Lacs
Gurgaon
On-site
Job Title: Data Engineer Location: Gurgaon (Onsite) Experience: 7+ Years Employment Type: Contract (6 months) Job Description: We are seeking a highly experienced Data Engineer with a strong background in building scalable data solutions using Azure/AWS Databricks, Scala/Python, and Big Data technologies. The ideal candidate should have a solid understanding of data pipeline design, optimization, and cloud-based deployments. Key Responsibilities: Design and build data pipelines and architectures on Azure or AWS Optimize Spark queries and Databricks workloads Manage structured/unstructured data using best practices Implement scalable ETL processes with tools like Airflow, Kafka, and Flume Collaborate with cross-functional teams to understand and deliver data solutions Required Skills: Azure/AWS Databricks Python / Scala / PySpark SQL, RDBMS Hive / HBase / Impala / Parquet Kafka, Flume, Sqoop, Airflow Strong troubleshooting and performance tuning in Spark Qualifications: Bachelor’s degree in IT, Computer Science, Software Engineering, or a related field Minimum 7+ years of experience in Data Engineering/Analytics Apply Now if you're looking to join a dynamic team working with cutting-edge data technologies! Job Type: Contractual / Temporary Contract length: 6 months Pay: From ₹180,000.00 per month Work Location: In person
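The responsibilities above center on building pipelines and tuning Spark/Databricks workloads. The sketch below is a minimal, illustrative PySpark ETL step, not a prescribed solution; the bucket paths and column names (event_type, event_ts, event_id, event_date) are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch (illustrative only): read raw events, apply a
# simple transformation, and write partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-events").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")       # hypothetical source
curated = (
    raw.filter(F.col("event_type").isNotNull())                # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))        # derive partition column
       .dropDuplicates(["event_id"])                           # basic de-duplication
)

# Partitioning by date keeps downstream Spark queries pruned and cheap.
curated.write.mode("overwrite").partitionBy("event_date") \
       .parquet("s3://example-bucket/curated/events/")         # hypothetical sink
```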
Posted 1 week ago
2.0 years
8 - 9 Lacs
Gurgaon
On-site
Overview: This role plays a pivotal part in software development activities and collaboration across the Strategy & Transformation (S&T) organization. Software Engineering is the cornerstone of scalable digital transformation across PepsiCo’s value chain. Work across the full stack, building highly scalable distributed solutions that enable positive user experiences. The role requires delivering the best possible software solutions, staying customer obsessed, and ensuring they generate incremental value. The engineer is expected to work closely with the user experience, product, IT, and process engineering teams to develop new products and prioritize and deliver solutions across S&T core priorities. The ideal candidate should have foundational knowledge of both front-end and back-end technologies, a passion for learning, and the ability to work in a collaborative environment. Responsibilities: Assist in designing, developing, and maintaining scalable web applications. Collaborate with senior developers and designers to implement features from concept to deployment. Work on both front-end (React, Angular, Vue.js, etc.) and back-end (Node.js, Python, Java, etc.) development tasks. Develop and consume RESTful APIs and integrate third-party services. Participate in code reviews, testing, and bug fixing. Write clean, maintainable, and well-documented code. Stay updated on emerging technologies and industry best practices. Qualifications: Minimum Qualifications: A Bachelor’s Degree in Computer Science or a related field 2+ years of relevant software development experience. Commanding knowledge of data structures, algorithms, and object-oriented design. Strong system design fundamentals and experience building distributed scalable systems. Expertise in Java and its related technologies. Experience with RESTful or GraphQL (preferred) APIs. Expertise in Java and the Spring / Spring Boot ecosystem, JUnit, back-end microservices, Serverless Computing. Experience with JavaScript/TypeScript, Node.js, React or React Native or related frameworks. Experience with large-scale messaging systems such as Kafka is a bonus. Experience with NoSQL databases is good to have. Hands-on experience with any cloud platform such as AWS, GCP, or Azure (preferred). Qualities Strong attention to detail and extremely well-organized Ability to work cross-functionally with product, service design and operations across the organization. Demonstrated passion for excellence with respect to Engineering services, education, and support. Strong interpersonal skills, ability to navigate through a complex and matrixed internal environment. Ability to work collaboratively with regional and global partners in other functional units.
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 323129BR Job Type Full Time Your role The individual in this role will be accountable for successful and timely delivery of projects in an agile environment where digital products are designed and built using cutting-edge technology for WMA clients and Advisors. It is a DevOps role that entails working with teams located in Budapest – Hungary, Wroclaw - Poland, Pune - India and New Jersey, US. This role will include, but not be limited to, the following: maintain and build CI/CD pipelines migrate applications to the cloud environment build scripts and dashboards for monitoring the health of applications build tools to reduce the occurrence of errors and improve customer experience deployment of changes in prod and non-prod environments follow release management processes for application releases maintain stability of non-prod environments work with development, QA and support groups in troubleshooting environment issues Your team You'll be working as an engineering leader in the Client Data and Onboarding Team in India. We are responsible for WMA (Wealth Management Americas) client-facing technology applications. This leadership role entails working with teams in the US and India. You will play an important role in ensuring a scalable development methodology is followed across multiple teams and participate in strategy discussions with business, and technology strategy discussions with architects. Our culture centers around innovation, partnership, transparency, and passion for the future. Diversity helps us grow, together. That’s why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients. Your expertise You should have 8+ years of experience and be able to: develop, build and maintain GitLab CI/CD pipelines use containerization technologies, orchestration tools (k8s), build tools (Maven, Gradle), VCS (GitLab), Sonar, and Fortify tools to build robust deploy and release infrastructure deploy changes in prod and non-prod Azure cloud infrastructure using Helm, Terraform, Ansible and set up appropriate observability measures build scripts (Bash, Python, Puppet) and dashboards for monitoring the health of applications (AppDynamics, Splunk, AppInsights) possess basic networking knowledge (load balancing, SSH, certificates), middleware knowledge (MQ, Kafka, Azure Service Bus, Event Hub) follow release management processes for application releases maintain stability of non-prod environments work with development, QA and support groups in troubleshooting environment issues About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. 
That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
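One of the responsibilities in the DevOps role above is building scripts and dashboards that monitor application health. As a minimal, assumption-laden sketch (the endpoint URL and timeout are invented placeholders, and it uses the common requests library rather than any UBS tooling), a health-check script of that kind might look like this:

```python
# Minimal health-check sketch (illustrative only): polls a service's /health
# endpoint and exits non-zero on failure so a CI/CD job or cron alert can react.
import sys
import requests

HEALTH_URL = "https://example.internal/app/health"  # hypothetical endpoint

def check_health(url: str, timeout: float = 5.0) -> bool:
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException as exc:
        print(f"health check failed: {exc}", file=sys.stderr)
        return False

if __name__ == "__main__":
    sys.exit(0 if check_health(HEALTH_URL) else 1)
```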
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 322638BR Job Type Full Time Your role Do you have a proven track record of building scalable applications to support Firmwide data distribution infrastructure? Are you confident at iteratively refining user requirements and removing any ambiguity? Do you want to design and build best-in-class, enterprise-scale applications using the latest technologies? Develop web services and share development expertise about best practices Implement solutions that are scalable by applying the right design principles & UBS practices Work with leaders and deliver the requirements Collaborate to refine user requirements Analyze root causes of incidents, document and provide answers Apply a methodical approach to software solutions through open discussions Perform regular code reviews and share results with colleagues Your team You’ll be working within the Group Chief Technology Organization. We provide Engineering services to all business divisions of the UBS group. The team partners with different divisions and functions across the Bank to develop innovative digital solutions and expand our technical expertise into new areas. As an experienced full-stack developer, you'll play an important role in building group-wide web services that help build a robust, world-class data distribution platform. Your expertise Proven track record of full-stack development using Java, Spring Boot, JPA and React Excellent understanding and hands-on experience of Core Java, Spring, Spring Boot and Microservices Must have a good understanding of Cloud Native microservice architecture, database concepts, Cloud Fundamentals and GitLab Hands-on experience in web development (React, Angular, EXT JS) Experience with relational databases (PostgreSQL) Unit testing frameworks (e.g. JUnit) Proven delivery experience in the Kafka ecosystem, including cluster/broker implementation and topic producer/consumer performance improvement Cloud implementation and deployment experience is a plus About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 1 week ago
3.0 years
0 Lacs
Guwahati, Assam, India
On-site
We are seeking a highly skilled Software Engineer with strong Python expertise and a solid understanding of data engineering principles to join our team. The ideal candidate will work on developing and optimizing scalable applications and data workflows, integrating diverse data sources, and supporting the development of data-driven products. This role requires hands-on experience in software development, data modeling, ETL/ELT pipelines, APIs, and cloud-based data systems. You will collaborate closely with product, data, and engineering teams to build high-quality, maintainable, and efficient solutions that support analytics, machine learning, and business intelligence initiatives. Roles and Responsibilities Software Development Design, develop, and maintain Python-based applications, APIs, and microservices with a strong focus on performance, scalability, and reliability. Write clean, modular, and testable code following best software engineering practices. Participate in code reviews, debugging, and optimization of existing applications. Integrate third-party APIs and services as required for application features or data ingestion. Data Engineering Build and optimize data pipelines (ETL/ELT) for ingesting, transforming, and storing structured and unstructured data. Work with relational and non-relational databases, ensuring efficient query performance and data integrity. Collaborate with the analytics and ML teams to ensure data availability, quality, and accessibility for downstream use cases. Implement data modeling, schema design, and version control for data pipelines. Cloud & Infrastructure Deploy and manage solutions on cloud platforms (AWS/Azure/GCP) using services such as S3, Lambda, Glue, BigQuery, or Snowflake. Implement CI/CD pipelines and participate in DevOps practices for automated testing and deployment. Monitor and optimize application and data pipeline performance using observability tools. Collaboration & Strategy Work cross-functionally with software engineers, data scientists, analysts, and product managers to understand requirements and translate them into technical solutions. Provide technical guidance and mentorship to junior developers and data engineers as needed. Document architecture, code, and processes to ensure maintainability and knowledge sharing. Required Skills Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field. 3+ years of experience in Python software development. Strong knowledge of data structures, algorithms, and object-oriented programming. Hands-on experience in building data pipelines (Airflow, Luigi, Prefect, or custom ETL frameworks). Proficiency with SQL and database systems (PostgreSQL, MySQL, MongoDB, etc.). Experience with cloud services (AWS/GCP/Azure) and containerization (Docker, Kubernetes). Familiarity with message queues/streaming platforms (Kafka, Kinesis, RabbitMQ) is a plus. Strong understanding of APIs, RESTful services, and microservice architectures. Knowledge of CI/CD pipelines, Git, and testing frameworks (PyTest, UnitTest). APPLY THROUGH THIS LINK Application link- https://forms.gle/WedXcaM6obARcLQS6
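The role above involves building ETL/ELT pipelines with orchestrators such as Airflow. The following is a minimal, illustrative Airflow DAG sketch assuming Airflow 2.4 or later; the DAG id, schedule, and task bodies are hypothetical placeholders rather than a real pipeline.

```python
# Minimal Airflow DAG sketch (illustrative only): a daily extract -> transform
# chain using PythonOperator.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from a source system")        # placeholder extract step

def transform():
    print("clean and reshape the extracted data")  # placeholder transform step

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```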
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview Leading AI-driven Global Supply Chain Solutions Software Product Company and one of Glassdoor’s “Best Places to Work” Seeking an astute individual who has a strong technical foundation, the ability to be hands-on with the broader engineering team as part of the development/deployment cycle, deep knowledge of industry best practices, and Data Science and Machine Learning experience, with the ability to apply it while working with both the platform and the product teams. Scope Our machine learning platform ingests data in real time, processes information from millions of retail items to serve deep learning models and produces billions of predictions on a daily basis. The Blue Yonder Data Science and Machine Learning team works closely with sales, product and engineering teams to design and implement the next generation of retail solutions. Data Science team members are tasked with turning both small, sparse and massive data into actionable insights with measurable improvements to the customer bottom line. Our Current Technical Environment Software: Python 3.* Frameworks/Others: TensorFlow, PyTorch, BigQuery/Snowflake, Apache Beam, Kubeflow, Apache Flink/Dataflow, Kubernetes, Kafka, Pub/Sub, TFX, Apache Spark, and Flask. Application Architecture: Scalable, Resilient, Reactive, event-driven, secure multi-tenant Microservices architecture. Cloud: Azure What We Are Looking For Bachelor’s Degree in Computer Science or related fields; graduate degree preferred. Solid understanding of data science and deep learning foundations. Proficient in Python programming with a solid understanding of data structures. Experience working with most of the following frameworks and libraries: Pandas, NumPy, Keras, TensorFlow, Jupyter, Matplotlib, etc. Expertise in any database query language, SQL preferred. Familiarity with Big Data tech such as Snowflake, Apache Beam/Spark/Flink, Databricks, etc. Solid experience with any of the major cloud platforms, preferably Azure and/or GCP (Google Cloud Platform). Reasonable knowledge of modern software development tools, and respective best practices, such as Git, Jenkins, Docker, Jira, etc. Familiarity with deep learning, NLP, reinforcement learning, combinatorial optimization, etc. Proven experience guiding junior data scientists in an official or unofficial setting. Desired knowledge of Kafka, Redis, Cassandra, etc. What You Will Do As a Senior Data Scientist, you serve as a specialist on the team, supporting it with the following responsibilities. Independently, or alongside junior scientists, implement machine learning models by procuring data from platform, client, and public data sources; implementing data enrichment and cleansing routines; implementing features, preparing modelling data sets, feature selection, etc.; evaluating candidate models, selecting, and reporting on test performance of the final one; ensuring proper runtime deployment of models; and implementing runtime monitoring of model inputs and performance in order to ensure continued model stability. Work with product, sales and engineering teams to help shape the final solution. Use data to understand patterns, come up with and test hypotheses; iterate. Help prepare sales materials, estimate hardware requirements, etc. Attend client meetings, online and onsite, to discuss new and current functionality. Our Values If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. 
Does your heart beat like ours? Find out here: Core Values All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
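The Senior Data Scientist responsibilities above revolve around preparing data, fitting candidate models, and reporting held-out test performance. The sketch below illustrates that loop in miniature with TensorFlow/Keras on synthetic data; the shapes, architecture, and hyperparameters are arbitrary placeholders, not Blue Yonder's actual modeling approach.

```python
# Minimal model-evaluation sketch (illustrative only): fit a tiny Keras
# regression model on synthetic data and report held-out test error.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8)).astype("float32")
w = rng.normal(size=(8, 1)).astype("float32")
y = X @ w + 0.1 * rng.normal(size=(1000, 1)).astype("float32")

# Simple holdout split: last 20% reserved for test reporting.
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)

test_mse = model.evaluate(X_test, y_test, verbose=0)
print(f"held-out test MSE: {test_mse:.4f}")
```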
Posted 1 week ago
0 years
3 - 6 Lacs
Chennai
On-site
LTTS India Chennai Job Description You should definitely have: Bachelor's degree in computer science, computer engineering, or related technologies. Seven years of experience in systems engineering within the networking industry. Expertise in Linux deployment, scripting and configuration. Expertise in TCP/IP communications stacks and optimizations. Experience with ELK (Elasticsearch, Logstash, Kibana), Grafana, data streaming (e.g., Kafka), and software visualization. Experience in analyzing and debugging code defects in the Production Environment. Proficiency in version control systems such as Git. Ability to design comprehensive test scenarios for systems usability, execute tests, and prepare detailed reports on effectiveness and defects for production teams. Full-cycle Systems Engineering experience covering requirements capture, architecture, design, development, and system testing. Demonstrated ability to work independently and collaboratively within cross-functional teams. Proficient in installing, configuring, debugging, and interpreting performance analytics to monitor, aggregate, and visualize key performance indicators over time. Proven track record of directly interfacing with customers to address concerns and resolve issues effectively. Strong problem-solving skills, capable of driving resolutions autonomously without senior engineer support. Experience in configuring MySQL and PostgreSQL, including setup of replication, troubleshooting, and performance improvement. Proficiency in networking concepts such as network architecture, protocols (TCP/IP, UDP), routing, and VLANs, essential for deploying new system servers effectively. Proficiency in shell/Bash scripting on Linux systems. Proficient in utilizing, modifying, troubleshooting, and updating Python scripts and tools to refine code. Excellent written and verbal communication skills, including the ability to document processes, procedures, and system configurations effectively. Ability to handle stress and maintain quality, including resilience to effectively manage stress and pressure, as well as a demonstrated ability to make informed decisions, particularly in high-pressure situations. This role is required to be on-call 24/7 to address service-affecting issues in production, and to work during the business hours of Chicago, aligning with local time for effective coordination and responsiveness to business operations and stakeholders in the region. It would be nice if you had: Solid software development experience in the Python programming language, with the ability to understand, execute, and debug issues, as well as develop new tools using Python. Experience in design, architecture, traffic flows, configuration, debugging, and deploying Deep Packet Inspection (DPI) systems. Proficient in managing and configuring AAA systems (Authentication, Authorization, and Accounting). Job Requirement ELK, Grafana
Posted 1 week ago
25.0 years
0 - 0 Lacs
Chennai
On-site
The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Summary: What you need to know about the role- As a member of the Observability Platform team, you will be responsible for the development and delivery of the applications and services that power PayPal’s Enterprise Observability platform. You will work closely with product, design, and development teams to understand what their observability needs are. We're looking for talented, motivated, and detail-oriented technologists with a passion for building robust systems at scale. We value collaboration, communication, and a passion for achieving engineering and product excellence. Meet our team: The Observability Team at PayPal is responsible for providing a world-class platform that can collect, ingest, store, alert and visualize data from many different sources in PayPal – like application logs, infrastructure, Virtual Machines, Containers, Network, Load Balancers etc. The platform should provide functionalities that enable different teams in PayPal to gain business insights and debug/triage issues in an easy-to-use, intuitive, and self-service manner. The platform should be scalable to support the data needs of PayPal (a Fortune 500 company); be highly available at 99.9% or higher; and be reliable and fault-tolerant across the different physical data centers and thousands of microservices. 
You’ll work alongside the best and the brightest engineering talent in the industry. You need to be dynamic, collaborative, and curious as we build new experiences and improve the Observability platform running at a scale few companies can match. Job Description: Your way to impact: As an engineer in our development team, you will be responsible for developing the next generation of PayPal's Observability platform, supporting the long-term reliability and scalability of the system, and being involved in implementations that avoid/minimize the day-to-day support work needed to keep the systems up and running. If you are passionate about application development, systems design, scaling beyond 99.999% reliability and working in a highly dynamic environment with a team of smart and talented engineers, then this is the job for you. You will work closely with product, experience, and/or development teams to understand the developer needs around observability and deliver the functions that meet their needs. The possibilities are unlimited for disruptive thinking, and you will have an opportunity to be a part of making history in the niche Observability area. Your Day to Day: As a Software Engineer - Backend, you'll contribute to building robust backend systems. You'll collaborate closely with experienced engineers to learn and grow your skills. Develop and maintain backend components. Write clean, efficient code adhering to coding standards. Participate in code reviews and provide feedback. What you need to bring: 2+ years of backend development experience and a bachelor’s degree in computer science or a related field. Strong foundation in programming concepts and data structures. Proficiency in at least one backend language (Java, Python, Ruby on Rails) Proficiency in back-end development utilizing Java EE technologies (Java, application servers, servlet containers, JMS, JPA, Spring MVC, Hibernate) Strong understanding of web services and Service-Oriented Architecture (SOA) standards, including REST, OAuth, and JSON, with experience in Java environments. Experience with ORM (Object-Relational Mapper) tools, working within Java-based solutions like Hibernate. Experience with databases (SQL, NoSQL) Preferred Qualification: Experience with "Observability Pillars - Logs / Metrics / Traces", Data Streaming Pipelines, Google Dataflow and Kafka Subsidiary: PayPal Travel Percent: 0 PayPal does not charge candidates any fees for courses, applications, resume reviews, interviews, background checks, or onboarding. Any such request is a red flag and likely part of a scam. To learn more about how to identify and avoid recruitment fraud please visit https://careers.pypl.com/contact-us . For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits: At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset—you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee share options, health and life insurance and more. 
To learn more about our benefits please visit https://www.paypalbenefits.com . Who We Are: Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com . Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. Any general requests for consideration of your skills, please Join our Talent Community . We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don’t hesitate to apply.
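The Observability platform described above ingests logs, metrics, and traces through streaming pipelines such as Kafka. As a small illustrative sketch only (the broker address, topic, and metric fields are hypothetical, and it uses the open-source kafka-python client rather than any PayPal-internal library), publishing a metric event might look like this:

```python
# Minimal metric-ingestion sketch (illustrative only): publish a JSON metric
# event to a Kafka topic, the kind of building block an observability ingest
# pipeline might use.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

metric = {
    "name": "checkout.latency_ms",   # hypothetical metric name
    "value": 42.7,
    "host": "web-01",
    "timestamp": time.time(),
}

producer.send("metrics.raw", value=metric)   # hypothetical topic
producer.flush()                             # block until the event is delivered
```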
Posted 1 week ago
0 years
4 - 7 Lacs
Noida
On-site
Company Description Daxko powers wellness to improve lives. Every day our team members focus their passion and expertise in helping health & wellness facilities operate efficiently and engage their members. Whether a neighborhood yoga studio, a national franchise with locations in every city, a YMCA or JCC, and every type of organization in between, we build solutions that make every aspect of running and being a member of a health and wellness organization easier and more delightful. Job Description The Senior Engineer I is responsible for developing high quality applications and writing code on a daily basis. This includes heavy collaboration with product managers, architects and other software engineers to build best-in-class software using modern technologies and an agile development process. The Senior Software Engineer focuses on the continued growth of their team and team members. The Senior Software Engineer reports to the Manager, Engineering/Development. You will also: Be responsible for defining design patterns and identifying frameworks used in the engineering team’s solutions development work Be responsible for establishing and guiding the engineering team’s development course Develop high quality applications that provide a delightful user experience and meet business expectations Develop clean, reusable, well-structured and maintainable code following best practices and industry standards Develop elegant, responsive, high performance, cross-platform solutions Develop, debug, and modify components of software applications and tools Write automated unit, integration and acceptance tests as appropriate to support our continuous integration pipelines Support and troubleshoot data and/or system issues as needed Be responsible for providing actionable feedback in code reviews Be capable of leading system architecture and design reviews Participate in user story creation in collaboration with the team Guide team members to develop prototypes as necessary and validate ideas with a data-driven approach Mentor team members in all aspects of the software development process No Travel Required No Budget Responsibilities Qualifications Bachelor’s degree in a related field such as Computer Science, Computer Engineering, Applied Mathematics, or Applied Sciences OR equivalent experience Five (5+) years of Software Engineering or other relevant experience Proficient in application development in modern object-oriented programming languages Five (5+) years of experience developing mobile applications in React Native Proficient in building and integrating with web services and RESTful APIs Proficient in SQL or other relational data storage technologies Experience in automated testing practices including unit testing, integration testing, and/or performance testing Experience using code versioning tools such as Git Experience with Agile development methodology Understanding of modern cloud architecture and tools Preferred Education and Experience: Bachelor’s degree or higher (or equivalent) in a related field such as Computer Science, Computer Engineering, Applied Mathematics, or Applied Sciences Seven (7+) years of Software Engineering or other relevant experience Experience developing web applications with React Experience with NodeJS and TypeScript Experience with dependency injection frameworks Experience working with Microservices Architecture Experience using Virtualized hosting and delivery (Docker, Kubernetes) Experience working with Realtime Data Streaming (e.g. 
Kafka, Kinesis) Experience with NoSQL/Non-relational Databases Experience with defining strategies used in an engineering team’s solutions development work Understanding of Serverless Computing (e.g. AWS cloud services) Understanding of AWS Messaging Services (e.g. SNS & SQS) Understanding of DevOps and CI/CD tools (e.g. GitLab CI / Jenkins / Bamboo) Understanding of frontend engineering workflow and build tools such as npm, webpack, etc. Additional Information #LI-Hybrid Daxko is dedicated to pursuing and hiring a diverse workforce. We are committed to diversity in the broadest sense, including thought and perspective, age, ability, nationality, ethnicity, orientation, and gender. The skills, perspectives, ideas, and experiences of all of our team members contribute to the vitality and success of our purpose and values. We truly care for our team members, and this is reflected through our offices, benefits, and great perks. These perks are only for our full-time team members. Some of our favorites include: Hybrid work model Leave entitlements Recently introduced hospitalization/caregiving leaves Paid parental leaves (Maternity, Paternity, & Adoption) Group Health Insurance Accidental Insurance Tax-saving reimbursements Provident Fund (PF) Casual work environments Company Events and Celebrations Performance achievement awards Referral bonus Learning & Development opportunities
Posted 1 week ago