
10569 Apache Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

15 - 22 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience: 3.00+ years
Salary: INR 1500000-2200000 / year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by LINEN.Cloud)
(Note: This is a requirement for one of Uplers' clients - LINEN.Cloud)

What do you need for this opportunity?
Must-have skills: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes

LINEN.Cloud is looking for: Java Developer
Function: Technical Management → Engineering Management; Software Engineering → Backend Development, Full-Stack Development
Java, Angular, Microservices, React.js, SQL

We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions.

Responsibilities:
- Designing, implementing, and unit testing Java applications.
- Aligning application design with business goals.
- Debugging and resolving technical problems that arise.
- Recommending changes to the existing Java infrastructure.
- Ensuring continuous professional self-development.

Requirements:
- Experience developing and testing Java web services (RESTful primarily, plus XML and JSON), supporting integration and enabling access via API calls.
- Experience with Tomcat, Apache, and similar web server technologies.
- Hands-on experience working with RabbitMQ and Kafka.
- Experience with the Spring Boot framework.
- Hands-on experience with Angular/Node.js is preferred.
- Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus.
- Experience with virtualization such as Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus.
- Agile/Scrum expertise.
- Experience establishing and enforcing branching and software development processes and deployment via CI/CD.

Competencies:
- Aligning application design with business goals.
- Debugging and resolving technical problems that arise.
- Recommending changes to the existing Java infrastructure.
- Ensuring continuous professional self-development.
- Team spirit and strong communication skills.
- Customer- and service-oriented, with a confident appearance in an international environment.
- Very high proficiency in English.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
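For context on the messaging requirement above: although this role is Java-centric, the sketch below (in Python, used for all examples on this page) shows the basic RabbitMQ publish/consume flow via the pika client. The broker address, queue name, and payload are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: publish and consume one message with RabbitMQ via pika.
# Broker host, queue name, and payload are assumptions for illustration.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # survive broker restarts

# Publish a persistent message
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
)

# Pull one message back off the queue
method, properties, body = channel.basic_get(queue="orders", auto_ack=True)
print(body)
connection.close()
```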

Posted 1 day ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About The Role: OSTTRA India

The Role: Technical Architect

The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals who build, support, and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.

The Impact: Together, we build, support, protect, and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.

What's in it for you: The current objective is to identify individuals with 12+ years of experience and high expertise to join an existing team of experts who are spread across the world. This is your opportunity to start at the beginning and get the advantages of rapid early growth. This role is based out of Gurgaon and is expected to work with different teams and colleagues across the globe.

Responsibilities:
The role shall be responsible for establishing, maintaining, socialising, and realising the target state of product architecture for the post-trade businesses of OSTTRA. This shall encompass all services that OSTTRA offers for these businesses and all the systems which enable those services. We are looking for a person who is high on energy and motivation and who feels challenged by difficult problems.
- The role shall partner with portfolio delivery leads, programme managers, portfolio business leads, and horizontal technical architects to frame the strategy, provide solutions for planned programmes, and guide the roadmaps. He/she shall be able to build high-level designs and low-level technical solutions, considering factors such as scalability, performance, security, maintainability, and cost-effectiveness.
- The role shall own the technical and architectural decisions for the projects and products. He/she shall review the designs and own the design quality, and will ensure that there is a robust code/implementation review practice in the product. Likewise, they shall be responsible for the robust CI/CD and DevSecOps engineering pipelines being used in the projects. He/she shall provide ongoing support on design and architecture problems to the delivery teams.
- The role shall manage the tech-debt log and plan for its remediation across deliveries and roadmaps.
- The role shall maintain the living architecture reference documents for the products.
- They shall actively partner with horizontal technical architects to factor tech constructs within their portfolios and to ensure vibrant feedback to the technical strategies.
- They shall be responsible for guiding the L3/L2 teams, when needed, in the resolution of production situations and incidents.
- They shall be responsible for defining guidelines and system designs for DR strategies and BCP plans for the products.
- They shall be responsible for architecting key mission-critical system components, reviewing designs, and helping uplift them. He/she should perform critical technical reviews of changes to applications or infrastructure on the system.
- The role shall enable an ecosystem such that the functional API, message, data, and flow models within the products of the portfolio are well documented,
and shall also provide strong governance and oversight of the same.

What We're Looking For:
- Rich domain experience in the financial services industry, preferably with financial markets within pre/post-trade life cycles or large-scale buy/sell/brokerage organisations.
- Experience in architecture design for multiple products and for large-scale change programmes.
- Adept with application development and engineering methods and tools.
- Robust experience with microservices application and services development and integration; adept with development tools, contemporary runtimes, and observability stacks for microservices.
- Experience in modelling APIs, messages, and possibly data.
- Experience with complex migrations, including data migration.
- Experience in architecture and design of highly resilient, high-availability, high-volume applications; able to initiate or contribute to initiatives around the reliability and resilience of applications.
- Rich experience with architectural patterns such as MVC-based front-end applications, API- and event-driven architectures, event streaming, message processing/orchestration, CQRS, and possibly event sourcing.
- Experience with protocols or integration technologies such as HTTP, MQ, FTP, REST/API, and possibly FIX/SWIFT.
- Experience with messaging formats and paradigms such as XSD, XML, XSLT, JSON, REST, and possibly gRPC and GraphQL.
- Experience with technologies like Kafka, Spark streams, Kubernetes/EKS, API gateways, web and application servers, message queuing infrastructure, and data transformation/ETL tools.
- Experience with languages like Java and Python; application development frameworks like Spring Boot and family, the Apache family, and commonplace AWS or other cloud provider services.
- Experience with engineering methods like CI/CD, build/deploy automation, infrastructure as code, and unit/integration testing methods and tools.
- Appetite to review and write code for complex problems, with interest and energy in design discussions and reviews.
- Experience in development with NoSQL and relational databases is required.
- Active or prior experience with MVC web development or with contemporary React/Angular frameworks.
- Experience migrating monolithic applications to cloud-based solutions, with an understanding of defining domain-based service responsibilities.
- Rich experience in designing cloud-native architecture, including microservices, serverless computing, and containerization (Docker, Kubernetes) on relevant platforms (GCP/AWS), along with monitoring aspects.

The Location: Gurgaon, India

About Company Statement:
OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes, and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow, and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk, and optimise processing to drive post-trade efficiencies. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima, and Reset.
These businesses have an exemplary track record of developing and supporting critical market infrastructure, and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities.

About OSTTRA:
Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA; however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post-trade experts. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end-to-end workflows, from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com.

What's In It For You? Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Recruitment Fraud Alert:
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer:
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), BSMGMT203 - Entry Professional (EEO Job Group)

Job ID: 315820
Posted On: 2025-07-10
Location: Gurgaon, Haryana, India

Posted 1 day ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Key Responsibilities:
- Design and develop scalable ETL pipelines using Cloud Functions, Cloud Dataproc (Spark), and BigQuery as the central data warehouse for large-scale batch and transformation workloads.
- Implement efficient data modeling techniques in BigQuery (including star/snowflake schemas, partitioning, and clustering) to support high-performance analytics and reduce query costs.
- Build end-to-end ingestion frameworks leveraging Cloud Pub/Sub and Cloud Functions for real-time and event-driven data capture.
- Use Apache Airflow (Cloud Composer) for orchestration of complex data workflows and dependency management.
- Apply Cloud Data Fusion and Datastream selectively for integrating specific sources (e.g., databases and legacy systems) into the pipeline.
- Develop strong backtracking and troubleshooting workflows to quickly identify data issues, job failures, and pipeline bottlenecks, ensuring consistent data delivery and SLA compliance.
- Integrate robust monitoring, alerting, and logging to ensure data quality, integrity, and observability.

Tech stack:
- GCP: BigQuery, Cloud Functions, Cloud Dataproc (Spark), Pub/Sub, Data Fusion, Datastream
- Orchestration: Apache Airflow (Cloud Composer)
- Languages: Python, SQL, PySpark
- Concepts: Data Modeling, ETL/ELT, Streaming & Batch Processing, Schema Management, Monitoring & Logging

Some of the most important data sources (you need to know ingestion techniques for these):
- CRM systems (cloud-based and internal)
- Salesforce
- Teradata
- MySQL
- API
- Other third-party and internal operational systems

Skills: etl/elt, cloud data fusion, schema management, sql, pyspark, cloud dataproc (spark), monitoring & logging, data modeling, bigquery, etl, cloud pub/sub, python, gcp, streaming & batch processing, datastream, cloud functions, spark, apache airflow (cloud composer)
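For context on the partitioning and clustering practice this posting highlights: a minimal sketch using the google-cloud-bigquery client to create a day-partitioned, clustered table. The project, dataset, schema, and field names are illustrative assumptions.

```python
# Minimal sketch: create a partitioned + clustered BigQuery table.
# Project, dataset, and field names are assumptions for illustration.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.analytics.events"  # hypothetical destination table

schema = [
    bigquery.SchemaField("event_date", "DATE"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table(table_id, schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",  # partition pruning cuts scanned bytes, hence cost
)
table.clustering_fields = ["customer_id"]  # co-locates rows for selective filters

table = client.create_table(table)
print(f"Created {table.full_table_id}")
```

Queries that filter on `event_date` and `customer_id` then scan only the relevant partitions and blocks, which is the cost-reduction effect the posting refers to.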

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

Remote

We are hiring CodeIgniter back-end developers to join our team. Send your CV with a cover letter to info@clickbydigital.in and clickbydigital@gmail.com, or call 7482020111. Attachments with details of previous work will be a plus point. A work-from-home facility is available.

Details are mentioned below:

Back-End Developer (knowledge of PHP, CodeIgniter, and Node.js is compulsory)

Job brief: We are looking for a Back-End Developer to produce scalable software solutions. You'll be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. The ideal candidate is a highly resourceful and innovative developer with extensive experience in the layout, design, and coding of software, specifically in PHP, Node.js, and CodeIgniter. You must also possess strong knowledge of web application development using Node.js, PHP, CodeIgniter, Java, JavaScript, the C# programming language, and MySQL Server databases, and should be familiar with CI/CD deployment and Git. As a Back-End Developer, you should be familiar with both front-end and back-end coding languages, development frameworks, and third-party libraries. You should also be a team player with a knack for visual design and utility. If you're also familiar with Agile methodologies, we'd like to meet you.

Responsibilities:
· Work with development teams and product managers to ideate software solutions
· Design client-side and server-side architecture
· Develop and manage well-functioning databases and applications
· Write effective APIs in CodeIgniter 4
· Test software to ensure responsiveness and efficiency
· Troubleshoot, debug, and upgrade software
· Create security and data protection settings
· Build features and applications with a mobile-responsive design
· Write technical documentation
· Work with data scientists and analysts to improve software

Requirements:
· Proven experience as a Back-End Developer (with a minimum of 3 years of work experience) or in a similar role
· Experience developing desktop and mobile applications
· Familiarity with common stacks
· Knowledge of multiple front-end languages and libraries (e.g., HTML/CSS, JavaScript, XML, jQuery)
· Knowledge of multiple back-end languages (e.g., C#, Java), PHP frameworks (CodeIgniter 4), and JavaScript frameworks (e.g., Angular, React, Node.js)
· Familiarity with databases (e.g., MySQL, MongoDB), web servers (e.g., Apache), and UI/UX design
· Excellent communication and teamwork skills
· Great attention to detail
· Organizational skills
· An analytical mind
· Degree in Computer Science, Statistics, or a relevant field

Salary Range: Rs 4.0 Lacs to 5.5 Lacs in hand per annum

Cheers,
The ClickByDigital team

#hiring #workfromhome #wfh #CIdeveloper #CodeIgniter #backenddeveloper

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Mohali district, India

Remote

Job Description: SDE-II – Python Developer

Job Title: SDE-II – Python Developer
Department: Operations
Location: In-Office
Employment Type: Full-Time

Job Summary: We are looking for an experienced Python Developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience in building scalable backend applications and APIs using modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust and high-performance solutions.

Key Responsibilities:
• Develop, test, and maintain backend applications using Django, Flask, or FastAPI.
• Build RESTful APIs and integrate third-party services to enhance platform capabilities.
• Utilize data handling libraries like Pandas and NumPy for efficient data processing.
• Write clean, maintainable, and well-documented code that adheres to industry best practices.
• Participate in code reviews and mentor junior developers.
• Collaborate in Agile teams using Scrum or Kanban workflows.
• Troubleshoot and debug production issues with a proactive and analytical approach.

Required Qualifications:
• 2 to 5 years of experience in backend development with Python.
• Proficiency in core and advanced Python concepts, including OOP and asynchronous programming.
• Strong command over at least one Python framework (Django, Flask, or FastAPI).
• Experience with data libraries like Pandas and NumPy.
• Understanding of authentication/authorization mechanisms, middleware, and dependency injection.
• Familiarity with version control systems like Git.
• Comfortable working in Linux environments.

Must-Have Skills:
• Expertise in backend Python development and web frameworks.
• Strong debugging, problem-solving, and optimization skills.
• Experience with API development and microservices architecture.
• Deep understanding of software design principles and security best practices.

Good-to-Have Skills:
• Experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs).
• Exposure to Machine Learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
• Knowledge of containerization tools (Docker, Kubernetes).
• Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures.
• Understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO).
• Familiarity with Agile practices and tools like Jira or Trello.
• Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure).

Company Overview: We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you'll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential.

Benefits and Perks:
• Competitive Salary: Earn up to ₹6–10 LPA based on skills and experience.
• Generous Time Off: Benefit from 18 annual holidays to maintain a healthy work-life balance.
• Continuous Learning: Access extensive learning opportunities while working on cutting-edge projects.
• Client Exposure: Gain valuable experience in client-facing roles to enhance your professional growth.
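Since this posting calls out dependency injection and auth mechanisms alongside FastAPI, here is a minimal sketch of how those fit together. The endpoint path, header name, and static key check are illustrative assumptions, not part of the posting.

```python
# Minimal FastAPI sketch: an endpoint guarded by a dependency-injected
# API-key check. Path, header, and key are assumptions for illustration.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

def require_token(x_api_key: str = Header(...)) -> str:
    """Dependency: reject requests without a valid X-API-Key header."""
    if x_api_key != "secret-demo-key":  # assumption: static key for the sketch
        raise HTTPException(status_code=401, detail="Invalid API key")
    return x_api_key

@app.get("/orders/{order_id}")
def read_order(order_id: int, _: str = Depends(require_token)):
    # The dependency runs before the handler; auth failures never reach here.
    return {"order_id": order_id, "status": "shipped"}

# Run locally with: uvicorn main:app --reload
```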

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Java Lead (Crew Management)
Location: Hyderabad/Pune/Greater Noida/Gurugram
Experience: 10+ years

Skills:
- Mandatory: Java/J2EE, Spring, Spring Boot, Git/SVN, Hibernate, Microservices, RESTful/SOAP APIs, Apache/Confluent Kafka knowledge
- Mandatory: Domain knowledge in airline operations / airport operations / flight operations / crew management
- Good to have: Node and React frameworks, AWS/Azure, EKS, New Relic

Minimum: 10+ years of tech experience, including 3 years of hands-on tech design lead and programming experience.

Responsibilities:
- Collaborate with business users to understand and develop business requirements; analyze and translate business requirements into functional and non-functional technical requirements.
- Evaluate and develop accurate timelines for tasks and projects based on complexity, resources, competing priorities, and the time required to complete them.
- Identify testing scenarios and ensure they are covered by automated or manual test plans.
- Design, develop, test, and deploy new and existing applications and configurations to satisfy defined requirements; ensure compliance with all testing, documentation, and change management requirements.
- Develop and utilize quality control measures such as code reviews, automated and manual testing, and debugging procedures.
- Develop and adhere to development standards that allow for the maintainability and testability of code in a manner that supports team development.
- Troubleshoot technical issues, identify the cause, determine possible resolutions, and remediate issues in existing applications; analyze application performance and take action to correct deficiencies.
- Familiarity with application development and support in a cloud environment.
- Excellent communication skills; capable of managing work and time based on a prioritized workload and delivering on commitments in a timely manner.
- Readiness to work in both development and support environments.
- Readiness to provide on-call support based on rostered days.

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

India

On-site

We are looking for a detail-oriented and experienced Python Web Scraping Developer to join our team. You will be responsible for building and maintaining efficient, scalable web scraping systems to extract structured data from dynamic and static web sources. The ideal candidate will have hands-on experience with scraping frameworks, handling anti-bot mechanisms, and working with large datasets.

Key Responsibilities:

Web Crawling & Scraping:
- Design, build, and maintain web crawlers using Python-based frameworks like Scrapy, BeautifulSoup, or Selenium.
- Develop robust and scalable scripts to extract data from structured and unstructured web sources (HTML, JavaScript-rendered pages, APIs).
- Handle dynamic websites using headless browsers or asynchronous scraping techniques.

Data Processing & Storage:
- Clean, normalize, and structure scraped data for downstream use (e.g., in databases, CSV/JSON, or APIs).
- Store data in databases such as MongoDB, PostgreSQL, MySQL, or cloud storage (e.g., AWS S3, GCP).

Anti-bot Handling:
- Implement IP rotation, user-agent spoofing, and CAPTCHA-solving mechanisms using tools like proxies, Tor, 2Captcha, or Puppeteer.
- Monitor scraping health and proactively manage bans or blocks.

Automation & Scheduling:
- Automate scraping tasks with cron jobs, Celery, or workflow orchestrators (e.g., Apache Airflow).
- Build dashboards or log reports to track scraping success and failure rates.

Collaboration & Documentation:
- Work closely with data analysts, engineers, and business stakeholders to understand scraping requirements.
- Document code, pipelines, and site-specific scraping logic.

Key Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2-5 years of experience in web scraping or automation using Python.
- Proficiency in Scrapy, BeautifulSoup, Selenium, Playwright, or Requests/HTTPX.
- Strong understanding of HTML, the DOM, CSS selectors, JavaScript, and REST APIs.
- Experience handling scraping for websites with anti-bot or JavaScript-rendered content.
- Familiarity with Linux environments, Git, and CI/CD pipelines.
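For a sense of the Scrapy work this posting describes, here is a minimal spider against the public scraping sandbox books.toscrape.com. The throttle delay and user-agent string are illustrative assumptions; real anti-bot handling (proxy rotation, CAPTCHA solving) sits on top of settings like these.

```python
# Minimal Scrapy spider sketch: paginated crawl with CSS selectors.
# Target is the public practice site books.toscrape.com; settings are assumptions.
import scrapy

class BookSpider(scrapy.Spider):
    name = "books"
    start_urls = ["http://books.toscrape.com/"]
    custom_settings = {
        "DOWNLOAD_DELAY": 1.0,  # be polite; throttle request rate
        "USER_AGENT": "Mozilla/5.0 (compatible; demo-bot)",
    }

    def parse(self, response):
        # Extract one item per product card on the page
        for book in response.css("article.product_pod"):
            yield {
                "title": book.css("h3 a::attr(title)").get(),
                "price": book.css("p.price_color::text").get(),
            }
        # Follow pagination until the "next" link disappears
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Run it with `scrapy runspider books_spider.py -o books.json` to dump the scraped items to JSON.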

Posted 1 day ago

Apply

5.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain, and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration, and continuous integration). You will conduct quality control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend, and implement changes to enhance the effectiveness of QA strategies.

What You Will Do:
- Independently develop scalable and reliable automated tests and frameworks for testing software solutions.
- Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes, and environments.
- Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model.
- Proactively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.

What Experience You Need:
- Bachelor's degree in a STEM major or equivalent experience
- 5-7 years of software testing experience
- Able to create and review test automation according to specifications
- Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
- Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others with respect to software validation
- Created test strategies and plans
- Led complex testing efforts or projects
- Participated in sprint planning as the test lead
- Collaborated with product owners, SREs, and technical architects to define testing strategies and plans
- Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes
- Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm charts, and Terraform constructs
- Cloud certification strongly preferred

What Could Set You Apart: An ability to demonstrate successful performance of our Success Profile skills, including:
- Attention to Detail: Define test case candidates for automation that are outside of product specifications, i.e.,
negative testing; create thorough and accurate documentation of all work, including status updates to summarize project highlights; validate that processes operate properly and conform to standards.
- Automation: Automate defined test cases and test suites per project.
- Collaboration: Collaborate with product owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads, and architects on functional and non-functional test strategies and plans.
- Execution: Develop scalable and reliable automated tests; develop performance testing scripts to assure products adhere to the documented SLO/SLI/SLAs; specify the need for test data types for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points.
- Quality Control: Perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and improve processes; analyze results of functional and non-functional tests and make recommendations for improvements.
- Performance / Resilience: Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct performance and resilience testing to ensure the products meet SLAs/SLOs.
- Quality Focus: Review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation, including status and project updates.
- Risk Mitigation: Work with product owners, QE, and development team leads to track and determine prioritization of defect fixes.
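To illustrate the "negative testing" idea the posting calls out (test cases outside the product specification), here is a small pytest sketch. The function under test and its cases are hypothetical; the pattern is what matters: parametrized happy-path and out-of-range cases in one regression-ready test.

```python
# Minimal pytest sketch: parametrized happy-path and negative test cases.
# The unit under test is a hypothetical stand-in for illustration.
import pytest

def normalize_score(raw: float) -> float:
    """Hypothetical unit under test: clamp a score into [0, 1]."""
    return max(0.0, min(1.0, raw))

@pytest.mark.parametrize(
    "raw, expected",
    [
        (0.5, 0.5),    # happy path: in-range input passes through
        (-3.0, 0.0),   # negative testing: below the valid range
        (7.2, 1.0),    # negative testing: above the valid range
    ],
)
def test_normalize_score(raw, expected):
    assert normalize_score(raw) == expected
```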

Posted 1 day ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you'll do:
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure as code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Manage your own project priorities, deadlines, and deliverables.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in sprint planning, sprint retrospectives, and other team activities.

What experience you need:
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in Java and SQL
- 2+ years of experience with cloud technology: GCP, AWS, or Azure
- 2+ years of experience designing and developing cloud-native solutions
- 2+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
- 3+ years of experience deploying and releasing software using Jenkins CI/CD pipelines; understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What could set you apart?
- Knowledge of or experience with Apache Beam for stream and batch data processing.
- Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to data visualization tools or platforms.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark
Good-to-have skills: NA
Minimum 5 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.

Professional & Technical Skills:
- Must-have skills: Proficiency in Apache Spark.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of data quality frameworks and best practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Spark.
- This position is based in Chennai.
- 15 years of full-time education is required.
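As a sense of the Spark ETL work this role centers on, here is a minimal PySpark batch sketch: read raw records, apply basic data-quality rules, and write partitioned output. The bucket paths and column names are illustrative assumptions.

```python
# Minimal PySpark batch ETL sketch; paths and columns are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landing zone (hypothetical path)
orders = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Transform: basic data-quality rules and type normalization
cleaned = (
    orders
    .dropDuplicates(["order_id"])                        # dedupe on the key
    .filter(F.col("amount").isNotNull())                 # drop incomplete rows
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date"))
)

# Load: columnar output partitioned for downstream query pruning
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders/"
)
```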

Posted 1 day ago

Apply

2.0 - 5.0 years

0 Lacs

Mohali district, India

On-site

We are looking for an experienced Python Developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience in building scalable backend applications and APIs using modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust and high-performance solutions.

Key Responsibilities:
• Develop, test, and maintain backend applications using Django, Flask, or FastAPI.
• Build RESTful APIs and integrate third-party services to enhance platform capabilities.
• Utilize data handling libraries like Pandas and NumPy for efficient data processing.
• Write clean, maintainable, and well-documented code that adheres to industry best practices.
• Participate in code reviews and mentor junior developers.
• Collaborate in Agile teams using Scrum or Kanban workflows.
• Troubleshoot and debug production issues with a proactive and analytical approach.

Required Qualifications:
• 2 to 5 years of experience in backend development with Python.
• Proficiency in core and advanced Python concepts, including OOP and asynchronous programming.
• Strong command over at least one Python framework (Django, Flask, or FastAPI).
• Experience with data libraries like Pandas and NumPy.
• Understanding of authentication/authorization mechanisms, middleware, and dependency injection.
• Familiarity with version control systems like Git.
• Comfortable working in Linux environments.

Must-Have Skills:
• Expertise in backend Python development and web frameworks.
• Experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs).
• Strong debugging, problem-solving, and optimization skills.
• Experience with API development and microservices architecture.
• Deep understanding of software design principles and security best practices.

Good-to-Have Skills:
• Exposure to Machine Learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
• Knowledge of containerization tools (Docker, Kubernetes).
• Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures.
• Understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO).
• Familiarity with Agile practices and tools like Jira or Trello.
• Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure).

Posted 1 day ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: We're looking for a Senior Engineering Manager to lead our Data/AI Platform and MLOps teams at slice. In this role, you'll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps which can be leveraged by various functions like legal, CX, and product in a secure manner. This is a hands-on leadership role, perfect for someone who enjoys solving deep technical problems while growing people and teams.

What You Will Do:
- Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, and data products – enabling product experience through data)
- Maintain hands-on technical leadership: lead by example through code reviews, architecture decisions, and direct technical contribution
- Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions
- Own the technical roadmap for our data platform, including infra modernization, performance, scalability, and cost efficiency
- Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores
- Build and scale ML infrastructure with MLOps best practices, including automated pipelines, model monitoring, and real-time inference systems
- Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization
- Implement enterprise AI governance, including model security, access controls, and compliance frameworks for internal AI applications
- Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture
- Establish and enforce best practices around data governance, access controls, and data quality
- Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines
- Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails
- Coach engineers and team leads through regular 1:1s, feedback, and performance conversations

What You Will Need:
- 10+ years of engineering experience, including 2+ years managing data or infra teams, with proven hands-on technical leadership
- Strong stakeholder management skills, with experience translating business requirements into data solutions and identifying product enhancement opportunities
- Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems
- Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform
- Hands-on experience building AI/ML platforms, including MLOps tools, and experience with LLM hosting, model serving, and secure AI application development
- Proven experience improving performance, cost, and observability in large-scale data systems
- Expert-level cloud platform knowledge, with container orchestration (Kubernetes, Docker) and infrastructure as code
- Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis)
- Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting
platforms, and secure AI application development patterns
- Comfort working in fast-paced, product-led environments, with the ability to balance innovation and regulatory constraints
- Bonus: Experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations

Life at slice: Life so good, you'd think we're kidding:
- Competitive salaries. Period.
- Extensive medical insurance that looks out for our employees and their dependents. We'll love you and take care of you, our promise.
- Flexible working hours. Just don't call us at 3 AM, we like our sleep schedule.
- Tailored vacation and leave policies so that you can enjoy every important moment in your life.
- A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
- Learning and upskilling opportunities. Seriously, not kidding.
- Good food, games, and a cool office to make you feel like home. An environment so good, you'll forget the term "colleagues can't be your friends".

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Google Cloud Platform (GCP) Data Engineer
Location: Hybrid (Bengaluru, India)
Job Type: Full-Time
Experience Required: Minimum 6 years
Joining: Immediate or within 1 week

About the Company: Tech T7 Innovations is a global IT solutions provider known for delivering cutting-edge technology services to enterprises across various domains. With a team of seasoned professionals, we specialize in software development, cloud computing, data engineering, machine learning, and cybersecurity. Our focus is on leveraging the latest technologies and best practices to create scalable, reliable, and secure solutions for our clients.

Job Summary: We are seeking a highly skilled Senior GCP Data Engineer with over 6 years of experience in data engineering and extensive hands-on expertise in Google Cloud Platform (GCP). The ideal candidate must have a strong foundation in GCS, BigQuery, Apache Airflow/Composer, and Python, with a demonstrated ability to design and implement robust, scalable data pipelines in a cloud environment.

Roles and Responsibilities:
- Design, develop, and deploy scalable and secure data pipelines using Google Cloud Platform components, including GCS, BigQuery, and Airflow.
- Develop and manage robust ETL/ELT workflows using Python and integrate with orchestration tools such as Apache Airflow or Cloud Composer.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver reliable and efficient data solutions.
- Optimize BigQuery performance using best practices such as partitioning, clustering, schema design, and query tuning.
- Manage, monitor, and maintain data lake and data warehouse environments with high availability and integrity.
- Automate pipeline monitoring, error handling, and alerting mechanisms to ensure seamless and reliable data delivery.
- Contribute to architecture decisions involving data modeling, data flow, and integration strategies in a cloud-native environment.
- Ensure compliance with data governance, privacy, and security policies as per enterprise and regulatory standards.
- Mentor junior engineers and drive best practices in cloud engineering and data operations.

Mandatory Skills:
- Google Cloud Platform (GCP): In-depth hands-on experience with GCS, BigQuery, IAM, and Cloud Functions.
- BigQuery (BQ): Expertise in large-scale analytics, schema optimization, and data modeling.
- Google Cloud Storage (GCS): Strong understanding of data lifecycle management, access controls, and best practices.
- Apache Airflow / Cloud Composer: Proficiency in writing and managing complex DAGs for data orchestration.
- Python Programming: Advanced skills in automation, API integration, and data processing using libraries like Pandas, PySpark, etc.

Preferred Qualifications:
- Experience with CI/CD pipelines for data infrastructure and workflows.
- Exposure to other GCP services like Dataflow, Pub/Sub, and Cloud Functions.
- Familiarity with Infrastructure as Code (IaC) tools such as Terraform.
- Strong communication and analytical skills for problem-solving and stakeholder engagement.
- GCP certifications (e.g., Professional Data Engineer) will be a significant advantage.
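For the DAG-writing skill this posting emphasizes, here is a minimal Airflow (Cloud Composer) sketch: two dependent tasks on a daily schedule. The DAG id, task bodies, and schedule are illustrative assumptions; real tasks would typically use GCS/BigQuery operators rather than print statements.

```python
# Minimal Airflow DAG sketch: a daily two-step extract -> transform pipeline.
# DAG id, schedule, and task logic are assumptions for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull files from GCS")  # placeholder for a GCS-to-BigQuery load

def transform():
    print("run BigQuery transformation")  # placeholder for SQL/ELT step

with DAG(
    dag_id="gcs_to_bigquery_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ spelling; older versions use schedule_interval
    catchup=False,       # don't backfill historical runs on first deploy
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```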

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Join us as a Solution Architect

This is an opportunity for an experienced Solution Architect to help us define the high-level technical architecture and design for a key data analytics and insights platform that powers the personalised customer engagement initiatives of the business. You'll define and communicate a shared technical and architectural vision of end-to-end designs that may span multiple platforms and domains. Take on this exciting new challenge and hone your technical capabilities while advancing your career and building your network across the bank. We're offering this role at vice president level.

What you'll do: We'll look to you to influence and promote collaboration across platform and domain teams on solution delivery. Partnering with platform and domain teams, you'll elaborate the solution and its interfaces, validating technology assumptions, evaluating implementation alternatives, and creating the continuous delivery pipeline. You'll also provide analysis of options and deliver end-to-end solution designs using the relevant building blocks, as well as producing designs for features that allow frequent incremental delivery of customer value.

On top of this, you'll be:
- Owning the technical design and architecture development that aligns with bank-wide enterprise architecture principles, security standards, and regulatory requirements
- Participating in activities to shape requirements, validating designs and prototypes to deliver change that aligns with the target architecture
- Promoting adaptive design practices to drive collaboration of feature teams around a common technical vision using continuous feedback
- Making recommendations on potential impacts to existing and prospective customers of the latest technology and customer trends
- Engaging with the wider architecture community within the bank to ensure alignment with enterprise standards
- Presenting solutions to governance boards and design review forums to secure approvals
- Maintaining up-to-date architectural documentation to support audits and risk assessment

The skills you'll need: As a Solution Architect, you'll bring expert knowledge of application architecture, and of business data or infrastructure architecture, with working knowledge of industry architecture frameworks such as TOGAF or ArchiMate. You'll also need an understanding of Agile and contemporary methodologies, with experience of working in Agile teams. A certification in cloud solutions like AWS Solution Architect is desirable, while an awareness of agentic AI-based application architectures using LLMs like OpenAI and agentic frameworks like LangGraph and CrewAI will be advantageous.

Furthermore, you'll need:
- Strong experience in solution design, enterprise architecture patterns, and cloud-native applications, including the ability to produce multiple views to highlight different architectural concerns
- Familiarity with big data processing in the banking industry
- Hands-on experience with AWS services, including but not limited to S3, Lambda, EMR, DynamoDB, and API Gateway
- An understanding of big data processing using frameworks or platforms like Spark, EMR, Kafka, Apache Flink, or similar
- Knowledge of real-time data processing, event-driven architectures, and microservices
- A conceptual understanding of data modelling and analytics, and of machine learning or deep learning models
- The ability to communicate complex technical concepts clearly to peers and leadership-level colleagues

Posted 1 day ago

Apply

5.0 - 7.0 years

25 - 28 Lacs

Pune, Maharashtra, India

On-site

Job Description: We are looking for a Big Data Engineer who will work on building and managing big data pipelines for us, to deal with the huge structured data sets that we use as input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Core Responsibilities:
- Design, build, and maintain robust data pipelines (batch or streaming) that process and transform data from diverse sources.
- Ensure data quality, reliability, and availability across the pipeline lifecycle.
- Collaborate with product managers, architects, and engineering leads to define technical strategy.
- Participate in code reviews, testing, and deployment processes to maintain high standards.
- Own smaller components of the data platform or pipelines and take end-to-end responsibility.
- Continuously identify and resolve performance bottlenecks in data pipelines.
- Take initiative and show the drive to pick up new things proactively, working as a senior individual contributor on the multiple products and features we have.

Required Qualifications:
- 5 to 7 years of experience in big data or data engineering roles.
- JVM-based languages like Java or Scala are preferred. For someone with solid big data experience, Python would also be OK.
- Proven and demonstrated experience working with distributed big data tools and processing frameworks like Apache Spark or equivalent (for processing), Kafka or Flink (for streaming), and Airflow or equivalent (for orchestration).
- Familiarity with cloud platforms (e.g., AWS, GCP, or Azure), including services like S3, Glue, BigQuery, or EMR.
- Ability to write clean, efficient, and maintainable code.
- Good understanding of data structures, algorithms, and object-oriented programming.

Tooling & Ecosystem:
- Use of version control (e.g., Git) and CI/CD tools.
- Experience with data orchestration tools (Airflow, Dagster, etc.).
- Understanding of file formats like Parquet, Avro, ORC, and JSON.
- Basic exposure to containerization (Docker) or infrastructure as code (Terraform is a plus).

Skills: airflow, pipelines, data engineering, scala, python, flink, aws, data orchestration, java, kafka, gcp, parquet, orc, azure, dagster, ci/cd, git, avro, terraform, json, docker, apache spark, big data
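To make the processing/streaming combination above concrete, here is a minimal Spark Structured Streaming sketch that reads from Kafka and lands Parquet files. Broker address, topic, sink paths, and trigger interval are illustrative assumptions, and the job needs the spark-sql-kafka connector package on the classpath.

```python
# Minimal sketch: Spark Structured Streaming from Kafka to Parquet.
# Brokers, topic, and sink paths are assumptions; requires the
# spark-sql-kafka-0-10 connector package when submitting the job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers bytes; decode the payload for downstream parsing
    .select(F.col("value").cast("string").alias("json"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://curated/events/")
    .option("checkpointLocation", "s3://curated/_checkpoints/events/")  # exactly-once bookkeeping
    .trigger(processingTime="1 minute")  # micro-batch cadence
    .start()
)
query.awaitTermination()
```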

Posted 1 day ago

Apply

6.0 - 10.0 years

20 - 50 Lacs

Chennai, Tamil Nadu, India

On-site

Position: Technical Lead – HPC (26821)
Location: Chennai, India
Joining Time: Immediate to 60 days

Key Responsibilities:
- Design, implement, and support high-performance compute (HPC) clusters
- Work with CPU/GPU architectures, scalable storage, and high-bandwidth interconnects
- Generate hardware BOMs, manage vendors, and oversee hardware release activities
- Configure Linux OS environments for HPC systems (SuSE, RedHat, Rocky, Ubuntu)
- Define and assemble system-level specifications and performance requirements
- Ensure timely delivery of projects, with strong documentation and support for manufacturing and customer teams
- Deliver golden images, procedures, scripts, and release documents

Required Qualifications:
- 6 to 10 years of proven experience in relevant HPC roles
- Strong knowledge of HPC infrastructure, including servers, GPUs, networking, storage, BIOS, and BMC
- In-depth, distribution-agnostic Linux expertise (SuSE, RedHat, Rocky, Ubuntu)
- Experience with PXE booting, systemd, and Linux HA
- Understanding of TCP/IP fundamentals and protocols (DNS, DHCP, HTTP, LDAP, SMTP)
- Strong scripting skills in Shell and Python
- Experience with configuration management tools like Salt, Chef, or Puppet
- Bachelor's or Master's degree (BE/BTech/MCA/MSc) in Computer Engineering, Electrical Engineering, or related fields (Note: candidates with only a diploma or a 3-year degree such as BCA/BSc are not eligible)

Preferred Skills:
- Exposure to DevOps tools: Jenkins, Git-based repositories, Docker, Singularity
- Experience with Kubernetes, Prometheus, Grafana
- Familiarity with web servers like Apache/Nginx, reverse proxies, and load-balancing setups (e.g., HAProxy)

Additional Requirements:
- Minimum 7 years of experience in HPC, cluster setup, and Linux systems
- Must demonstrate job stability (minimum 2-year tenure at previous organizations)
- No current gaps in employment
- Strong communication, time management, multitasking, and adaptability

Interview Process: 3 rounds of technical discussions and 1 HR interview

Skills: high-bandwidth interconnects, servers, cpu/gpu architectures, linux, kubernetes, devops tools (jenkins, git, docker, singularity), scalable storage, linux os environments (suse, redhat, rocky, ubuntu), tcp/ip fundamentals and protocols (dns, dhcp, http, ldap, smtp), hpc infrastructure, scripting (shell, python), configuration management (salt, chef, puppet), monitoring tools (prometheus, grafana), web servers (apache, nginx, haproxy)

Posted 1 day ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us: Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them into the mainstream economy with the help of technology.

Job Summary:
- Build systems for the collection and transformation of complex data sets for use in production systems
- Collaborate with engineers on building and maintaining back-end services
- Implement data schema and data management improvements for scale and performance
- Provide insights into key performance indicators for the product and customer usage
- Serve as the team's authority on data infrastructure, privacy controls, and data security
- Collaborate with appropriate stakeholders to understand user requirements
- Support efforts for continuous improvement, metrics, and test automation
- Maintain operations of the live service as issues arise, on a rotational, on-call basis
- Verify whether the data architecture meets security and compliance requirements and expectations
- Be a fast learner, able to adapt quickly at a rapid pace

Minimum Qualifications:
- Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience
- 3+ years of progressive experience demonstrating strong architecture, programming, and engineering skills
- Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Python, and Scala
- Strong SQL skills, with the ability to write complex queries
- Strong experience with orchestration tools like Airflow
- Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations
- Experience with streaming technologies such as Apache Spark, Kafka, and Flink
- Backend experience including Apache Cassandra, MongoDB, and relational databases such as Oracle and PostgreSQL
- Solid hands-on experience with AWS/GCP (4+ years)
- Strong communication and soft skills
- Knowledge of and/or experience with containerized environments, Kubernetes, and Docker
- Experience implementing and maintaining highly scalable microservices in REST, Spring Boot, and gRPC
- An appetite for trying new things and building rapid POCs

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage
- Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake
- Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications
- Ensure data quality and integrity by implementing robust data validation and cleansing processes
- Optimize data pipelines for performance, scalability, and reliability
- Develop and maintain ETL (extract, transform, load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime
- Implement best practices for data management, security, and compliance
- Document data engineering processes, workflows, and technical specifications
- Stay up to date with industry trends and emerging technologies in data engineering and big data.
Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 25 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!

Posted 1 day ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities:
· Azure Cloud & Databricks:
  o Design and build efficient data pipelines using Azure Databricks (PySpark).
  o Implement business logic for data transformation and enrichment at scale.
  o Manage and optimize Delta Lake storage solutions.
· API Development:
  o Develop REST APIs using FastAPI to expose processed data.
  o Deploy APIs on Azure Functions for scalable and serverless data access.
· Data Orchestration & ETL:
  o Develop and manage Airflow DAGs to orchestrate ETL processes.
  o Ingest and process data from various internal and external sources on a scheduled basis.
· Database Management:
  o Handle data storage and access using PostgreSQL and MongoDB.
  o Write optimized SQL queries to support downstream applications and analytics.
· Collaboration:
  o Work cross-functionally with teams to deliver reliable, high-performance data solutions.
  o Follow best practices in code quality, version control, and documentation.

Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
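To illustrate the "REST APIs using FastAPI" responsibility above, here is a minimal, self-contained sketch; the route, record shape, and in-memory store are invented stand-ins for the real PostgreSQL-backed service.

```python
# A minimal FastAPI sketch for exposing processed records. The in-memory
# store keeps the example runnable; a real service would query PostgreSQL.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="processed-data-api-sketch")

# Hypothetical stand-in for a PostgreSQL lookup.
FAKE_STORE = {1: {"id": 1, "metric": 42.0}, 2: {"id": 2, "metric": 7.5}}

@app.get("/records/{record_id}")
def get_record(record_id: int) -> dict:
    """Return one processed record, or 404 if it does not exist."""
    record = FAKE_STORE.get(record_id)
    if record is None:
        raise HTTPException(status_code=404, detail="record not found")
    return record

# Run locally with: uvicorn main:app --reload  (assumes this file is main.py)
```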

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Lead Platform Engineer – AWS Data Platform
Location: Hybrid – Hyderabad, Telangana
Experience: 10+ years
Employment Type: Full-Time

About the Role
Infoslab is hiring on behalf of our client, a leading healthcare technology company committed to transforming healthcare through data. We are seeking a Lead Platform Engineer to architect, implement, and lead the development of a secure, scalable, and cloud-native data platform on AWS. This role combines deep technical expertise with leadership responsibilities. You will build the foundation that supports critical business intelligence, analytics, and machine learning applications across the organization.

Key Responsibilities
Architect and build a highly available, cloud-native data platform using AWS services such as S3, Glue, Redshift, Lambda, and ECS.
Design reusable platform components and frameworks to support data engineering, analytics, and ML pipelines.
Build and maintain CI/CD pipelines, GitOps workflows, and infrastructure-as-code using Terraform.
Drive observability, operational monitoring, and incident response processes across environments.
Ensure platform security, compliance (HIPAA, SOC2), and audit-readiness in partnership with InfoSec.
Lead and mentor a team of platform engineers, promoting best practices in DevOps and cloud infrastructure.
Collaborate with cross-functional teams to deliver reliable and scalable data platform capabilities.

Required Skills and Experience
10+ years of experience in platform engineering, DevOps, or infrastructure roles with a data focus.
3+ years in technical leadership or platform engineering management.
Deep experience with AWS services, including S3, Glue, Redshift, Lambda, ECS, and Athena.
Strong hands-on experience with Python or Scala, and automation tooling.
Proficient in Terraform and CI/CD tools (GitHub Actions, Jenkins, etc.).
Advanced knowledge of Apache Spark for both batch and streaming workloads.
Proven track record of building secure, scalable, and compliant infrastructure.
Strong understanding of observability, reliability engineering, and infrastructure automation.

Preferred Qualifications
Experience with containerization and orchestration (Docker, Kubernetes).
Familiarity with Data Mesh principles or domain-driven data platform design.
Background in healthcare or other regulated industries.
Experience integrating data platforms with BI tools like Tableau or Looker.

Why Join
Contribute to a mission-driven client transforming healthcare through intelligent data platforms.
Lead high-impact platform initiatives that support diagnostics, research, and machine learning.
Work with modern engineering practices including IaC, GitOps, and serverless architectures.
Be part of a collaborative, hybrid work culture focused on innovation and technical excellence.
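As a hedged sketch of the AWS automation this role involves, the following Python/boto3 snippet starts a Glue job and polls it to completion; the job name "nightly-etl" is hypothetical, and credentials/region are assumed to come from the standard AWS environment.

```python
# Start and poll an AWS Glue job run (illustrative sketch, not the client's
# actual platform code). Requires boto3 and configured AWS credentials.
import time
import boto3

glue = boto3.client("glue")

# Kick off a run of the (hypothetical) Glue job.
run = glue.start_job_run(JobName="nightly-etl")
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    status = glue.get_job_run(JobName="nightly-etl", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    print(f"run {run_id}: {state}")
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)
```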

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

India

Remote

Location: Remote
Experience: 4-6 years
Position: Gen-AI Developer (Hands-on)

Technical Requirements:
Hands-on Data Science, Agentic AI, AI/Gen AI/ML/NLP
Azure services (App Services, Containers, AI Foundry, AI Search, Bot Services)
Experience in C#
Semantic Kernel
Strong background in working with LLMs and building Gen AI applications
AI agent concepts
.NET Aspire
End-to-end environment setup for ML/LLM/Agentic AI (Dev/Prod/Test)
Machine learning & LLM deployment and development
Model training, fine-tuning, and deployment
Kubernetes, Docker, serverless architecture
Infrastructure as Code (Terraform, Azure Resource Manager)
Performance optimization & cost management
Cloud cost management, resource optimization, and auto-scaling
Cost-efficiency strategies for cloud resources
MLOps frameworks (Kubeflow, MLflow, TFX)
Large language model fine-tuning and optimization
Data pipelines (Apache Airflow, Kafka, Azure Data Factory)
Data storage (SQL/NoSQL, data lakes, data warehouses)
Data processing and ETL workflows
Cloud security practices (VPCs, firewalls, IAM)
Secure cloud architecture and data privacy
CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins)
Automated testing and deployment for ML models
Agile methodologies (Scrum, Kanban)
Cross-functional team collaboration and sprint management
Experience with model fine-tuning and infrastructure setup for local LLMs
Custom model training and deployment pipeline design
Good communication skills (written and oral)
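For the MLOps tooling named above, a minimal MLflow tracking sketch follows; the experiment name, parameters, and loss values are illustrative only, not taken from the posting.

```python
# A minimal MLflow experiment-tracking sketch (pip install mlflow).
# Runs log to ./mlruns by default; all names and numbers are hypothetical.
import mlflow

mlflow.set_experiment("llm-finetune-sketch")

with mlflow.start_run():
    # Record the knobs of a (hypothetical) fine-tuning run...
    mlflow.log_param("base_model", "example-7b")
    mlflow.log_param("learning_rate", 2e-5)
    # ...and the resulting evaluation metric per epoch.
    for epoch, loss in enumerate([1.92, 1.41, 1.18], start=1):
        mlflow.log_metric("eval_loss", loss, step=epoch)

print("Run logged; inspect with: mlflow ui")
```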

Posted 1 day ago

Apply

7.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Job Overview
We are seeking a highly skilled and experienced Lead Data Engineer (AWS) to spearhead the design, development, and optimization of our cloud-based data infrastructure. As a technical leader, you will drive scalable data solutions using AWS services and modern data engineering tools, ensuring robust data pipelines and architectures for real-time and batch data processing. The ideal candidate is a hands-on technologist with a deep understanding of distributed data systems, cloud-native data services, and team leadership in Agile environments.

Responsibilities:
Design, build, and maintain scalable, fault-tolerant, and secure data pipelines using AWS-native services (e.g., Glue, EMR, Lambda, S3, Redshift, Athena, Kinesis).
Lead end-to-end implementation of data architecture strategies including ingestion, storage, transformation, and data governance.
Collaborate with data scientists, analysts, and application developers to understand data requirements and deliver optimal solutions.
Ensure best practices for data quality, data cataloging, lineage tracking, and metadata management using tools like AWS Glue Data Catalog or Apache Atlas.
Optimize data pipelines for performance, scalability, and cost-efficiency across structured and unstructured data sources.
Mentor and lead a team of data engineers, providing technical guidance, code reviews, and architecture recommendations.
Implement data modeling techniques (OLTP/OLAP), partitioning strategies, and data warehousing best practices.
Maintain CI/CD pipelines for data infrastructure using tools such as AWS CodePipeline and Git.
Monitor production systems and lead incident response and root cause analysis for data infrastructure issues.
Drive innovation by evaluating emerging technologies and proposing improvements to the existing data platform.

Skills & Qualifications:
Minimum 7 years of experience in data engineering, with at least 3 years in a lead or senior engineering role.
Strong hands-on experience with AWS data services: S3, Redshift, Glue, Lambda, EMR, Athena, Kinesis, RDS, DynamoDB.
Advanced proficiency in Python/Scala/Java for ETL development and data transformation logic.
Deep understanding of distributed data processing frameworks (e.g., Apache Spark, Hadoop).
Solid grasp of SQL and experience with performance tuning in large-scale environments.
Experience implementing data lakes, lakehouse architectures, and data warehousing solutions in the cloud.
Knowledge of streaming data pipelines using Kafka, Kinesis, or AWS MSK.
Proficiency with infrastructure-as-code (IaC) using Terraform or AWS CloudFormation.
Experience with DevOps practices and tools such as Docker, Git, Jenkins, and monitoring tools (CloudWatch, Prometheus, Grafana).
Expertise in data governance, security, and compliance in cloud environments. (ref:hirist.tech)
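As one concrete (and hedged) example of working with the Athena service listed above, the boto3 snippet below submits a query and reads back the results; the database, table, and output bucket names are placeholders.

```python
# Run an Athena query over an S3-backed table (illustrative sketch).
# Requires boto3 and configured AWS credentials; names are hypothetical.
import time
import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until Athena reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```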

Posted 1 day ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities
5+ years of professional experience in quality assurance and/or application development, including Java programming with Selenium, JUnit, and TestNG
Hands-on experience with Jenkins, Maven, Eclipse, and Git
Collaborate with developers and product managers to design specific testing strategies for features being developed
Generate comprehensive test plans and test cases, and execute them for feature verification and regression
Strong fundamentals in object-oriented design and data structures
Experience with integration-testing frameworks such as Selenium, and with the object-oriented programming language Apache Groovy
Participate in all quality activities within the Quality Engineering team, including testing, automation, test planning, design, debugging, execution, review, and customer support
Demonstrable experience with Agile and test-driven development
Reporting and monitoring: drive continuous improvement initiatives that focus on software quality and the delivery of delightful user experiences
Design and implement test automation for new features, and for existing features for regression testing
High-level understanding of in-memory distributed data storage systems such as Memcached, Ehcache, and Hazelcast

Key Skills
Hands-on experience with SQL queries and a solid understanding of database concepts
Good understanding of software design patterns, algorithms, and data structures
Good understanding of web service APIs
Experience with Agile methodology and working with enterprise cloud-based technologies
Proven ability to learn new tools and technologies with minimal guidance
Excellent oral, written, problem-solving, and analytical skills; must be able to succeed with minimal resources
MS/BS/ME/BE in Computer Science or equivalent (ref:hirist.tech)
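The posting centers on Java with Selenium/JUnit/TestNG; to keep a single sketch language across this page, here is the same smoke-test idea expressed in Python's Selenium bindings, with an invented URL and locator.

```python
# A minimal Selenium smoke test in Python (pip install selenium).
# Selenium 4 can resolve the browser driver automatically; the target page
# and assertion are illustrative, not from the posting.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    # A trivial regression-style assertion on the page under test.
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text, f"unexpected heading: {heading.text}"
    print("smoke test passed")
finally:
    driver.quit()
```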

Posted 1 day ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities
Develop, maintain, and enhance web applications using Laravel and Vue.js.
Collaborate with UI/UX designers and product managers to implement responsive, user-centric interfaces.
Integrate RESTful APIs, third-party libraries, and external data sources.
Develop and maintain relational database schemas using MySQL.
Ensure the application's performance, scalability, and security.
Write clean, well-structured, and efficient code following modern PHP and JavaScript standards.
Perform thorough testing and debugging of applications to ensure quality and performance.
Participate in code reviews and contribute to the continuous improvement of coding standards.
Convert business and functional requirements into detailed system designs and architecture.
Document features, APIs, technical specifications, and workflows.
Collaborate with DevOps on CI/CD pipeline integration, version control (Git), and deployment workflows.

Technical Skills & Qualifications
3+ years of professional experience in web application development.
Strong expertise in PHP and the Laravel framework (v8 or newer).
Solid experience with Vue.js (Vue 2 or Vue 3), including Vue CLI, Vuex, and component-driven development.
Proficient in JavaScript, HTML5, CSS3, and modern frontend workflows.
Deep understanding of MySQL, including stored procedures, performance tuning, and complex joins.
Experience building and integrating RESTful APIs and web services.
Familiarity with version control systems such as Git (GitHub/GitLab/Bitbucket).
Good understanding of MVC architecture, ORM (Eloquent), and Laravel middleware.
Working knowledge of unit testing and automated testing frameworks.
Exposure to agile methodologies, Scrum practices, and issue-tracking tools like Jira or Trello.

Preferred Qualifications
Experience with Laravel Livewire, Inertia.js, or similar technologies is a plus.
Understanding of Webpack, Vite, or other build tools.
Experience with Docker, Kubernetes, or containerized environments is an advantage.
Familiarity with cloud platforms like AWS, Azure, or GCP is beneficial.
Basic knowledge of CI/CD pipelines, Nginx/Apache, and Linux server environments. (ref:hirist.tech)
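Although this role is PHP/Laravel and Vue.js, the REST-integration work it describes can be illustrated language-neutrally; the Python sketch below round-trips a record through a hypothetical Laravel-style API, the kind of integration check this stack typically needs.

```python
# Exercise a REST API end-to-end (illustrative sketch; pip install requests).
# The endpoint, payload, and response shape are hypothetical stand-ins for a
# Laravel API, which by convention serves at /api on the local dev server.
import requests

BASE_URL = "http://localhost:8000/api"

# Create a resource, then read it back, mimicking a basic integration check.
created = requests.post(f"{BASE_URL}/tasks", json={"title": "demo"}, timeout=5)
created.raise_for_status()
task_id = created.json()["id"]  # assumes the API echoes the new record's id

fetched = requests.get(f"{BASE_URL}/tasks/{task_id}", timeout=5)
fetched.raise_for_status()
assert fetched.json()["title"] == "demo"
print("round-trip OK")
```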

Posted 1 day ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About This Role
Wells Fargo is seeking an Information Security Engineer. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow.

In This Role, You Will
Participate in security consulting on small projects for internal clients to ensure uniformity with corporate information security policy and standards
Track or remediate vulnerabilities and security issues
Review and correlate security logs
Assist with the design, documentation, testing, maintenance, and troubleshooting of security solutions related to networking, cryptography, cloud, authentication and directory services, email, internet, applications, and endpoint security
Provide technical support for security-related issues
Utilize industry-leading security solutions and best practices to implement one or more components of information security such as availability, integrity, confidentiality, risk management, threat identification, modeling, monitoring, incident response, access management, and business continuity
Collaborate and consult with peers, colleagues, and managers to resolve issues and achieve goals
Interface with more experienced technologists

Required Qualifications:
2+ years of Information Security Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
Primary skills: PlainID, any IAM tool, shell scripting, Linux/Unix, Oracle/SQL, AppDynamics/Splunk, SaaS, REST APIs
Secondary skills: Java, PL/SQL, WebLogic/Tomcat/Apache/JBoss, middleware tools, automation
Provide support for the PlainID authorization tool, product version upgrades, and server maintenance
Provide 24x5 application/production support in the areas of incident management through ServiceNow, monitoring, and request fulfilment
Provide support in handling problem tickets, submitting CHANGE requests in ServiceNow, server maintenance, BCP exercises, administration, and deployments
Perform health checks, maintenance, and post-release activities
Monitor logs via Splunk
Full-time graduate from a reputed university
2+ years of experience in application support and troubleshooting, application maintenance, monitoring, and build/deployment
Experience in PlainID
Experience in Oracle Identity Manager (OIM) or any IAM tool
Experience in Oracle/SQL Server queries and reporting
Experience in production/application support of Java/J2EE-based applications on a Unix platform
Exposure to any of the middleware/application servers: WebLogic, Tomcat, Apache, JBoss, etc.
Willing to work on-call and in rotational shifts
Experience in monitoring tools such as Splunk, AppDynamics, etc.
Excellent analytical, problem-solving, and multitasking skills
Experience working in an Agile/Scrum development process
Works well with partner teams and peers toward established goals and timelines
Works well under self-direction on assigned tasks
Provides active participation and leadership in team duties and responsibilities
Initiates and promotes changes to team processes to enhance automation
Experience working in Remedy/ServiceNow/Pac2000 or any ticketing tool, with ITIL exposure
Excellent verbal, written, and interpersonal communication skills
Strong vendor management skills
Knowledge/Skills/Ability
Strong organizational, multitasking, and prioritizing skills

Job Expectations:
Proven ability to complete tasks, including planning work from initiation through implementation while meeting project completion dates with acceptable levels of supporting documentation and quality, backed by a history of past projects that demonstrate this ability
Experience with systems monitoring tools such as HP OpenView, Nagios, Zabbix, and Splunk
Advanced information security technical skills
Industry certifications such as Security+, ISACA CSX Fundamentals, Red Hat Certified Specialist in Server Security and Hardening (EX413), or Red Hat Certified Engineer (RHCE)

Posting End Date: 7 Aug 2025
The job posting may come down early due to the volume of applicants.

We Value Equal Opportunity
Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk-mitigating and compliance-driven culture, which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements.

Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples, and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process.

Applicants With Disabilities
To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.

Drug and Alcohol Policy
Wells Fargo maintains a drug-free workplace. Please see our Drug and Alcohol Policy to learn more.

Wells Fargo Recruitment And Hiring Requirements
Third-party recordings are prohibited unless authorized by Wells Fargo. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.

Reference Number: R-474863
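As a hedged sketch of the log-triage and scripting duties this support role describes, the following Python snippet tallies error signatures from an application log; the log path and pattern are placeholders, not Wells Fargo specifics.

```python
# Summarize the most frequent error messages in an application log
# (illustrative sketch; path and pattern are hypothetical).
import re
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/var/log/app/application.log")  # placeholder path
PATTERN = re.compile(r"\b(ERROR|FATAL)\b.*?:\s*(?P<msg>.+)$")

counts = Counter()
for line in LOG_FILE.read_text(errors="replace").splitlines():
    match = PATTERN.search(line)
    if match:
        # Truncate messages so near-identical errors bucket together.
        counts[match.group("msg")[:80]] += 1

# Print the five most frequent error messages, most common first.
for msg, n in counts.most_common(5):
    print(f"{n:5d}  {msg}")
```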

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

On-site

Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description
REQUIREMENTS:
Total experience 7+ years
Extensive experience in Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and microservices architecture
Hands-on experience with RDBMS such as SQL Server, Oracle, MySQL, and PostgreSQL
Strong backend development skills with databases such as MongoDB, Elasticsearch, and PostgreSQL
Expertise in writing high-quality code following object-oriented design principles, with a strong balance between performance, extensibility, and maintainability
Experience in SOA-based architecture and web services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST)
Hands-on experience in low- and high-level design (LLD + HLD)
Strong working experience with RabbitMQ, Kafka, ZooKeeper, and REST APIs
Expertise in the CI/CD capabilities required to improve efficiency
Hands-on experience deploying applications to hosted data centers or cloud environments using technologies such as Docker, Kubernetes, Jenkins, Azure DevOps, and Google Cloud Platform
A good understanding of UML and design patterns
Ability to simplify solutions, optimize processes, and resolve escalated issues efficiently
Strong problem-solving skills and a passion for continuous improvement
Strong communication skills and the ability to collaborate effectively with cross-functional teams

RESPONSIBILITIES:
Writing and reviewing great quality code
Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
Envisioning the overall solution for defined functional and non-functional requirements, and being able to define the technologies, patterns, and frameworks to realize it
Determining and implementing design methodologies and tool sets
Enabling application development by coordinating requirements, schedules, and activities
Being able to lead/support UAT and production rollouts
Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it
Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement
Giving constructive feedback to team members and setting clear expectations
Helping the team in troubleshooting and resolving complex bugs
Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken
Carrying out POCs to make sure that the suggested design/technologies meet the requirements

Qualifications
Bachelor's or master's degree in computer science, information technology, or a related field.
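The stack here is Java/Spring with RabbitMQ and Kafka; for consistency with the rest of this page's sketches, the Kafka publish-and-acknowledge flow is shown below in Python via kafka-python, with a hypothetical broker and topic.

```python
# Publish a JSON event to Kafka and wait for the broker's acknowledgement
# (illustrative sketch; pip install kafka-python). Broker and topic names
# are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    # Serialize dict payloads to JSON bytes before sending.
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Send a small event and block until the broker confirms delivery.
future = producer.send("orders", {"order_id": 123, "status": "CREATED"})
metadata = future.get(timeout=10)
print(f"delivered to {metadata.topic}[{metadata.partition}] @ {metadata.offset}")
producer.flush()
```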

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
