3.0 years
16 - 20 Lacs
kanpur, uttar pradesh, india
Remote
Experience: 3.00+ years
Salary: INR 1,600,000 - 2,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(Note: This is a requirement for one of Uplers' clients - a seed-funded B2B SaaS company in procurement analytics.)

What do you need for this opportunity?
Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company - Procurement Analytics is looking for:

Join the team revolutionizing procurement analytics at SenseCloud. Imagine working at a company where you get the best of both worlds: the fast-paced execution of a startup and the guidance of leaders who have built things that actually work at scale. We are not just rethinking how procurement analytics is done - we are redefining it.

At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line for the attention of IT and analytics teams, no more clunky dashboards - just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you are ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About the Role
We are looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems - think automated research assistants, data-driven copilots, and workflow optimizers. You will own projects end to end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL databases, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-have technical skills:
- 3-5 years of software engineering or ML experience in production environments.
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
- Experience building and securing REST/GraphQL APIs and microservices.
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
- Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core soft skills:
- Product mindset: translate ambiguous requirements into clear deliverables and user value.
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
- Bias for action: experiment quickly, measure, and iterate without sacrificing quality or security.
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-haves:
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
- Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
- Familiarity with Palantir Foundry.
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
- Prior work on conversational UX, prompt marketplaces, or agent simulators.
- Contributions to open-source AI projects or published research.

Why join us?
- Direct impact on products used by Fortune 500 teams.
- Work with cutting-edge models and shape best practices for enterprise AI agents.
- Collaborative culture that values experimentation, continuous learning, and work-life balance.
- Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
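To make the agent/RAG work described in this posting concrete, below is a minimal, framework-agnostic sketch of a retrieval-augmented answer step in Python. It assumes an OpenAI-compatible client and a tiny in-memory document set; the model names, documents, and the answer helper are illustrative only, and a production system would use a real vector store (e.g., Pinecone) and an agent framework such as LangChain or LangGraph.

```python
# Minimal RAG sketch: embed documents, retrieve the closest one, and ask the LLM.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Contract 42 covers laptop purchases with supplier Acme, net-60 payment terms.",
    "Supplier Globex delivers office furniture; average lead time is 18 days.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(DOCS)

def answer(question: str) -> str:
    # Retrieve the most similar document by cosine similarity.
    q_vec = embed([question])[0]
    sims = doc_vectors @ q_vec / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec))
    context = DOCS[int(np.argmax(sims))]
    # Ground the LLM answer in the retrieved context.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("What are the payment terms for laptop purchases?"))
```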
Posted 3 days ago
6.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Who We Are:
Netcore Cloud is a MarTech platform helping businesses design, execute, and optimize campaigns across multiple channels. With a strong focus on leveraging data, machine learning, and AI, we empower our clients to make smarter marketing decisions and deliver exceptional customer experiences. Our team is passionate about innovation and collaboration, and we are looking for a talented Senior Data Scientist to join our team and work on impactful, high-scale projects.

Role Summary:
As a Senior Data Scientist, you'll work on developing, deploying, and optimizing advanced machine learning solutions across a range of problem statements such as personalization, customer segmentation, predictive modeling, and NLP. This is a high-impact role that will allow you to collaborate across cross-functional teams and contribute directly to our product innovation roadmap.

What You'll Do:
Model Development and Innovation:
- Design and implement ML models for problems including recommendation engines, user segmentation, conversion prediction, and NLP-based automation.
- Stay on top of the latest research in AI/ML and evaluate new approaches (e.g., LLMs, generative AI) for practical application.
- Optimize model performance, scalability, and interpretability for production systems.
MLOps & Deployment:
- Contribute to the deployment of models into production environments using MLOps best practices.
- Leverage platforms such as AWS SageMaker or Google Vertex AI for training, tuning, and scaling models.
Data & Experimentation:
- Work with large-scale, real-time datasets to derive actionable insights and build data pipelines for ML training and evaluation.
- Design and run experiments (A/B testing, uplift modeling, etc.) to validate hypotheses and improve product KPIs.
Technology and Tools:
- Work with large-scale datasets, ensuring data quality and scalability of solutions.
- Leverage cloud platforms like AWS and GCP for model training and deployment.
- Utilize tools and libraries such as Python, TensorFlow, PyTorch, Scikit-learn, and Spark for development.
- With so much innovation happening around GenAI and LLMs, we prefer candidates who have already gained exposure to these technologies via AWS Bedrock or Google Vertex AI.
Cross-functional Collaboration:
- Partner with product, engineering, and marketing teams to understand business requirements and translate them into data science solutions.

Who You Are:
Education:
- Bachelor's degree from a Tier 1 institute in a relevant field, or a Master's or PhD in Computer Science, Data Science, Mathematics, or a related field.
Experience:
- 4-6 years of hands-on experience in machine learning or data science roles.
- Proven expertise in machine learning, deep learning, NLP, and recommendation systems.
- Hands-on experience deploying ML models in production at scale.
- Experience in a product-focused or customer-facing domain such as MarTech, AdTech, or B2B SaaS is a plus.
Technical Skills:
- Proficiency in Python, SQL, and ML frameworks like TensorFlow or PyTorch.
- Strong understanding of statistical methods, predictive modeling, and algorithm design.
- Familiarity with cloud-based solutions (AWS SageMaker, GCP AI Platform, or similar).
Soft Skills:
- Strong analytical and problem-solving mindset.
- Excellent communication skills to articulate data-driven insights.
- A passion for innovation and staying up to date with the latest trends in AI/ML.

Why Join Us:
Opportunity to work on cutting-edge AI/ML projects impacting millions of users.
Be part of a collaborative, innovation-driven team in a fast-growing Martech company. Competitive compensation and growth opportunities in a fast-paced environment. We’d love to hear how your background can contribute to building the next generation of intelligent marketing solutions. Apply now or reach out for a conversation. :)
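As a small illustration of the customer-segmentation work mentioned in this role, here is a hedged sketch using scikit-learn to cluster users on synthetic engagement features. The feature names, cluster count, and data are purely illustrative; a real pipeline would engineer features from campaign and behavioral data before segmenting.

```python
# Toy user-segmentation sketch with k-means on synthetic engagement features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Columns: sessions_per_week, purchases_last_90d, days_since_last_visit (all synthetic)
features = np.column_stack([
    rng.poisson(5, 1000),
    rng.poisson(2, 1000),
    rng.integers(0, 90, 1000),
]).astype(float)

scaled = StandardScaler().fit_transform(features)
segments = KMeans(n_clusters=4, n_init=10, random_state=7).fit_predict(scaled)

# Inspect the average profile of each segment so it can be given a business-friendly name later.
for seg in range(4):
    mask = segments == seg
    print(f"segment {seg}: size={mask.sum()}, mean profile={features[mask].mean(axis=0).round(1)}")
```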
Posted 3 days ago
2.0 - 5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Description
This is for a Consultant - BASIS position. Level: L2. Experience: 2-5 years. Project name: JLR - BASIS team. Country: UK. Location: Anywhere. Travel: No.

Job Title: SAP BASIS
Experience Required: 2 to 5 years

Job Summary
We are seeking a motivated SAP BASIS Administrator to support and maintain our SAP landscape. The ideal candidate will have hands-on experience in SAP system administration, performance tuning, and troubleshooting, with a strong understanding of BASIS components and database management.

Key Responsibilities
- Install, configure, and maintain SAP systems including ECC, S/4HANA, BW, PI/PO, Solution Manager, and Fiori.
- Perform system monitoring, performance tuning, and troubleshooting.
- Manage SAP transports, client copies, and system refreshes.
- Administer SAP databases (Oracle, HANA, or others), including backups, restores, and upgrades.
- Apply SAP support packages, kernel upgrades, and enhancement packs.
- Maintain system documentation and ensure compliance with internal policies.
- Collaborate with functional and technical teams for issue resolution and system improvements.
- Support disaster recovery planning and testing.

Required Skills
- 2-5 years of experience in SAP BASIS administration.
- Hands-on experience with SAP NetWeaver, HANA, and traditional RDBMS (Oracle/SQL Server).
- Familiarity with OS-level administration (Linux/Unix/Windows).
- Understanding of the SAP Transport Management System (TMS).
- Strong analytical and problem-solving skills.
- Good communication and documentation abilities.

Preferred Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- SAP BASIS certification is a plus.
- Exposure to cloud platforms (AWS, Azure, GCP) and SAP Cloud Connector is desirable.
- Experience with Solution Manager and system monitoring tools.

Get empowered by NTT DATA Business Solutions! We Transform. SAP® Solutions into Value.
For any questions related to the job description, you may connect with the Recruiting contact specified below.
Recruiter Name: Antonette Nirisha
Recruiter Email ID: Antonette.Nirisha@nttdata.com
NTT DATA Business Solutions is a fast-growing international IT company and one of the world's leading SAP partners. We are a full-service provider delivering everything from business consulting to implementation of SAP solutions, including hosting services and support.
Posted 3 days ago
6.0 - 10.0 years
18 - 22 Lacs
chennai
Work from Office
About the Role
We are seeking a motivated and skilled OpenShift DevOps Engineer to join our team. In this role, you will be responsible for building, deploying, and maintaining our applications on the OpenShift platform using CI/CD best practices. You will work closely with developers and other operations team members to ensure smooth and efficient delivery of software updates.

Responsibilities:
- Collaborate with customers to understand their specific requirements.
- Stay up to date with industry trends and emerging technologies.
- Prepare and maintain documentation for processes and procedures.
- Participate in on-call support and incident response, as needed.
- Good knowledge of virtual networking and storage configuration.
- Working experience with Linux.
- Hands-on experience with Kubernetes services, load balancing, and networking modules.
- Proficient in security, firewall, and storage concepts.
- Implement and manage OpenShift environments, including deployment configurations, cluster management, and resource optimization.
- Design and implement CI/CD pipelines using tools like OpenShift Pipelines, GitOps, or other industry standards.
- Automate build, test, and deployment processes for applications on OpenShift.
- Troubleshoot and resolve issues related to OpenShift deployments and CI/CD pipelines.
- Collaborate with developers and other IT professionals to ensure smooth delivery of software updates.
- Stay up to date on the latest trends and innovations in OpenShift and CI/CD technologies.
- Participate in the continuous improvement of our DevOps practices and processes.

Qualifications:
- Bachelor's degree in Computer Science or a related field (or equivalent work experience).
- Familiarity with infrastructure as code (IaC) tools (e.g., Terraform, Ansible).
- Excellent problem-solving, communication, and teamwork skills.
- Experience working in Agile/Scrum or other collaborative development environments.
- Flexibility to work in a 24/7 support environment.
- Proven experience as a DevOps Engineer or in a similar role.
- Strong understanding of OpenShift platform administration and configuration.
- Experience with CI/CD practices and tools, preferably OpenShift Pipelines, GitOps, or similar options.
- Experience with containerization technologies (Docker, Kubernetes).
- Experience with scripting languages (Python, Bash).
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Good to have:
- Experience with cloud platforms (AWS, Azure, GCP).
- Experience with Infrastructure as Code (IaC) tools (Terraform, Ansible).
- Experience with security best practices for DevOps pipelines.
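As a small, hedged illustration of the automation this role involves, the Python sketch below uses the official Kubernetes client (which also works against OpenShift clusters, since they expose the Kubernetes API) to report deployments whose ready replica count lags the desired count. The namespace name is an illustrative placeholder.

```python
# Report deployments that are not fully rolled out in a given namespace.
# Requires the `kubernetes` Python package and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster

apps = client.AppsV1Api()
for dep in apps.list_namespaced_deployment(namespace="my-app").items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    if ready < desired:
        print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")
```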
Posted 3 days ago
5.0 - 7.0 years
20 - 25 Lacs
pune, chennai, bengaluru
Work from Office
- Python AI development (Flask/FastAPI)
- Development experience using a microservices-based architecture
- Knowledge of Google Cloud Platform (GCP) or other cloud environments
- Familiarity with containerization and orchestration technologies such as Docker
- Experience with databases such as MySQL and search technologies like Elasticsearch
- Experience working with queue-based systems (e.g., RabbitMQ, Kafka) is a plus
Location: Bengaluru, Chennai, Pune, Noida, Mumbai, Hyderabad, Kochi
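Since the role centers on Python microservices with Flask/FastAPI, here is a minimal FastAPI sketch of a single scoring endpoint. The route, payload fields, and placeholder logic are illustrative assumptions; a real service would call an actual model and integrate with MySQL, Elasticsearch, or a message queue as needed.

```python
# Minimal FastAPI microservice with one POST endpoint.
# Run with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-service")

class ScoreRequest(BaseModel):
    text: str

class ScoreResponse(BaseModel):
    label: str
    length: int

@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder logic standing in for a real model call.
    label = "positive" if "good" in req.text.lower() else "neutral"
    return ScoreResponse(label=label, length=len(req.text))
```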
Posted 3 days ago
2.0 - 5.0 years
5 - 7 Lacs
noida
Work from Office
Key Responsibilities: Develop and maintain scalable full-stack applications using Java, Spring Boot, and Angular for building rich UI screens and custom/reusable components. Design and implement cloud-based solutions leveraging Google Cloud Platform (GCP) services such as BigQuery, Google Cloud Storage, Cloud Run, and PubSub. Manage and optimize CI/CD pipelines using Tekton to ensure smooth and efficient development workflows. Deploy and manage Google Cloud services using Terraform, ensuring infrastructure as code principles. Mentor and guide junior software engineers, fostering professional development and promoting systemic change across the development team. Collaborate with cross-functional teams to design, build, and maintain efficient, reusable, and reliable code. Drive best practices and improvements in software engineering processes, including coding standards, testing, and deployment strategies. Required Skills: Java/Spring Boot (5+ years): In-depth experience in developing backend services and APIs using Java and Spring Boot. Angular (3+ years): Proven ability to build rich, dynamic user interfaces and custom/reusable components using Angular. Google Cloud Platform (2+ years): Hands-on experience with GCP services like BigQuery, Google Cloud Storage, Cloud Run, and PubSub. CI/CD Pipelines (2+ years): Experience with tools like Tekton for automating build and deployment processes. Terraform (1-2 years): Experience in deploying and managing GCP services using Terraform. J2EE (5+ years): Strong experience in Java Enterprise Edition for building large-scale applications. Experience mentoring and delivering organizational change within a software development team.
Posted 3 days ago
1.0 - 2.0 years
3 - 6 Lacs
dhule
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow
Programming languages: Java; scripting languages like Python, Shell Script, SQL
5+ years of experience in IT application delivery with proven experience in agile development methodologies
1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, and data processing with Dataflow)
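To illustrate the BigQuery side of this GCP stack, below is a hedged Python sketch that runs a simple aggregation with the official google-cloud-bigquery client. The project, dataset, and table names are illustrative placeholders, not part of the posting.

```python
# Run a simple aggregation against BigQuery.
# Requires the `google-cloud-bigquery` package and application default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # project id is illustrative

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-project.analytics.events`   -- illustrative table
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""

for row in client.query(query).result():
    print(row.event_date, row.events)
```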
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
hyderabad, telangana, india
On-site
Role: Senior Data Engineer
Location: Hyderabad
Experience: 5-7 years

Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes:
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements, create optimal architecture, and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards; debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding the cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post-delivery
- Number of non-compliance issues
- Reduction in recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected:
Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for the team and peers.
Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, and test cases and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, and Azure ADF and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.

Additional Comments:
Key Responsibilities:
• Design, build, and maintain scalable, cloud-native data pipelines using AWS.
• Develop ETL/ELT workflows using Glue, PySpark, and Python.
• Optimize data modeling, partitioning, and querying strategies.
• Skilled in reading and developing Terraform-based infrastructure code.
Qualifications:
• Bachelor's degree in Computer Science or Engineering.
• 5 to 7 years of experience in data engineering.
• Strong expertise in cloud development, particularly with AWS services such as Aurora PostgreSQL RDS, Glue, S3, IAM, EC2, and Lambda.
• Hands-on experience implementing best practices with AWS Glue, the Glue Data Catalog, and PySpark.
• Proficient in PySpark and Python, with expertise in SQL.
• Experience with AWS data lakes, Apache Iceberg, and Lake Formation.
• Understanding of data modelling.
• Strong problem-solving and communication skills.
• Experience working in agile, multi-project environments.
• Experience with CI/CD pipelines and DevOps practices.
Skills: Python, AWS, Glue
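As a concrete but hedged illustration of the Glue/PySpark ETL work described above, here is a minimal AWS Glue job skeleton in Python. The database, table, column, and bucket names are illustrative placeholders; a real job would follow the project's catalog layout, add data-quality checks, and use its agreed partitioning strategy.

```python
# Minimal AWS Glue ETL job: read from the Glue Data Catalog, trim columns, and write Parquet to S3.
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (names are illustrative).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Keep a few columns; a real job would also cast types, dedupe, and validate.
curated = orders.select_fields(["order_id", "amount", "order_date"])

glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/", "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()
```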
Posted 3 days ago
3.0 years
0 Lacs
hyderabad, telangana, india
On-site
About Deutsche Börse Group Headquartered in Frankfurt, Germany, we are a leading international exchange organization and market infrastructure provider. We empower investors, financial institutions, and companies by facilitating access to global capital markets. Our business areas cover the entire financial market transaction process chain, including trading, clearing, settlement and custody, digital assets and crypto, market analytics, and advanced electronic systems. As a technology-driven company, we develop and operate cutting-edge IT solutions globally. About Deutsche Börse Group in India Our presence in Hyderabad serves as a key strategic hub, comprising India’s top-tier tech talent. We focus on crafting advanced IT solutions that elevate market infrastructure and services. Together with our colleagues from across the globe, we are a team of highly skilled capital market engineers forming the backbone of financial markets worldwide. We harness the power of innovation in leading technology to create trust in the markets of today and tomorrow. Senior Cloud Operations Engineer – Google Cloud Division / Section: DBAG, CIO, CTO, Chief Cloud Officer / Core Infrastructure Field of Activity: The Chief Technology Officer (CTO) area is at the heart of the Information Technology division of Deutsche Börse Group. The CTO area develops and operates the group-wide IT infrastructure (network, data centers and Cloud), the Group Data & Advanced Analytics and the Enterprise Architecture, and therefore significantly contributes to Deutsche Börse Group overall IT strategy. The CTO area drives digital transformation and leverages innovation while keeping production operations stable. Within the CTO Area, the Cloud / Core Infrastructure department enables the usage of public cloud services from Microsoft and Google and provides core infrastructure services such as data center management and non-trading network operations. The department develops and maintains cloud platforms for GCP and Azure, which allows the other IT departments (Product ITs) to make use of cloud in a safe and efficient manner. To strengthen our team, we are looking for a motivated Software engineer Cloud CoE (f/m/d) to take responsibility in an interesting and dynamic, international environment. As a Support Specialist within the Cloud CoE, you will play a crucial role in processing, ensuring smooth operations, and providing essential support to our development community in a highly regulated capital markets environment. Tasks / Responsibilities: Handle and resolve questions and service requests from employees via the Jira ticket system, ensuring timely and effective responses. Provide technical assistance and support for incoming queries and issues related to the cloud platform, including troubleshooting and problem resolution. Develop and maintain documentation, FAQs, and knowledge base articles to help users resolve common issues independently. Work closely with other team members, IT departments, and cloud service providers to address and resolve complex issues. Continuously monitor the cloud platform's performance and health, identifying and addressing potential issues before they impact users. Assist in training and onboarding new employees on the use of the cloud platform and related tools. Generate and analyze reports on support activities, ticket trends, and system performance to provide insights and recommendations for improvement. 
Qualifications / Required skills:
- BE / B.Tech / any Master's degree related to Information Technology or Cyber Security
- 3 years of hands-on experience with Google Cloud, including relevant certifications
- 1 year of work experience with infrastructure-as-code and Terraform or similar automation tools
- 1 year of working experience with GitHub, Jira, or similar
- Highly organized, with strong communication capabilities and team and customer orientation
- Proficiency in written and spoken English (at least level B1)
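To illustrate the ticket-handling side of this role, here is a hedged Python sketch that lists open cloud-platform service requests via Jira's REST search API using the `requests` library. The instance URL, project key, and JQL query are illustrative assumptions; credentials are read from the environment.

```python
# List open service requests from a Jira project via the REST API.
import os
import requests

JIRA_URL = "https://example.atlassian.net"  # illustrative instance
auth = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": 'project = CLOUD AND status = "Open" ORDER BY created ASC', "maxResults": 20},
    auth=auth,
    timeout=30,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], "-", issue["fields"]["summary"])
```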
Posted 3 days ago
10.0 years
0 Lacs
india
Remote
Overview CoinTracker makes cryptocurrency portfolio tracking and tax compliance simple. CoinTracker enables consumers and businesses to seamlessly track their cryptocurrency portfolio, investment performance, taxes, and more. We are a globally distributed team on a mission to enable everyone in the world to use crypto with peace of mind. Learn more about our mission, culture, and hiring process. Some things we’re proud of 🛠️ Building foundational tools in the cryptocurrency space 📄 Over 1M tax forms generated 💲 $250B+ in cryptocurrency is tracked on CoinTracker (~over 5% of the entire crypto market) 🤝 Partnered with Coinbase, H&R Block, Intuit TurboTax, MetaMask, OpenSea, Phantom, Solana, and Uniswap 🗺️ Founders: Jon previously built TextNow (200M downloads), Chandan was previously a product manager at Google & Google[x] 💼 $100M+ venture capital raised from Accel, General Catalyst, Y Combinator, Initialized Capital, Coinbase Ventures, Kraken Ventures, Intuit Ventures, 776 Ventures, Balaji Srinivasan, Claire Hughes Johnson, Gokul Rajaram, Serena Williams, Zach Perret 🌴 Awesome benefits Your mission Join our close-knit, early-stage distributed team, where we tackle exciting technical challenges and create transformative crypto products that give people peace of mind. What You Will Do You will be a part of the newly formed Integration Expansion Team that works closely with our Integration engineering team. You’ll own and deliver new Integrations (blockchains & exchanges) by using our existing Integrations platform system to scale our Integrations coverage and assist in troubleshooting critical Integrations related issues. Collaborate with engineering, customer experience & product teams within Cointracker. Participate in hiring efforts to help scale the Integration Expansion Team. What We Look For We are hiring experienced backend software engineers with 10+ years of non-internship experience to help build and scale our new Integrations Expansion team. 2+ years of experience as a tech lead or team lead. Have strong CS and system design fundamentals, write high-quality code, value software testing, and uphold best practices in engineering. Strong Python knowledge & working with third party API’s is preferred. Familiarity with AWS or GCP & cloud fundamentals is preferred. Drawn to an early-stage, high-growth startup environment. Ability to read blockchain data / understanding of web3 strongly preferred. Background in data engineering domain and closely working with customer support team is a plus. Able to work effectively in a remote setting and able to overlap with our core hours of 9 AM to 12 PM Pacific Timezone. Our engineering process includes Code reviews Continuous integration Multiple daily automated deployments to production Automated testing with >85% code coverage Our tech stack is Web: HTML, Typescript, React, React Native, Styled-Components Mobile: React Native, Expo, GraphQL Backend: Python, Flask, GraphQL, Postgres, BigTable, Redis, Python RQ Infrastructure: GCP, Temporal, Terraform, PostgreSQL, Docker, Pub/Sub, Datadog, PagerDuty You don’t need to know any or all of these, but be willing to learn!
Posted 3 days ago
8.0 years
0 Lacs
india
Remote
This role is for one of Weekday's clients Min Experience: 8 years Location: Remote (India) JobType: full-time Requirements What You'll Be Working On AI Assistant & Agent Systems Agent Architecture & Implementation: Build sophisticated multi-agent systems that can reason, plan, and execute complex sales workflows Context Management: Develop systems that maintain conversational context across complex multi-turn interactions LLM and Agentic Platforms: Build scalable large language model and agentic platforms that enable widespread adoption and viability of agent development within the Apollo ecosystem Backend Systems: Build back-end systems necessary to support the agents AI features: Conversational AI, Natural Language Search, Personalized Email Generation and similar AI features Classical AI/ML (Optional Focus) Search Scoring & Ranking: Develop and improve recommendation systems and search relevance algorithms Entity Extraction: Build models for automatic company keywords, people keywords, and industry classification Lookalike & Recommendation Systems: Create intelligent matching and suggestion engines Key Responsibilities Design and Deploy Production LLM Systems: Build scalable, reliable AI systems that serve millions of users with high availability and performance requirements Agent Development: Create sophisticated AI agents that can chain multiple LLM calls, integrate with external APIs, and maintain state across complex workflows Prompt Engineering Excellence: Develop and optimize prompting strategies, understand trade-offs between prompt engineering vs fine-tuning, and implement advanced prompting techniques System Integration: Build robust APIs and integrate AI capabilities with existing Apollo infrastructure and external services Evaluation & Quality Assurance: Implement comprehensive evaluation frameworks, devising A/B experiments, and monitoring systems to ensure AI systems meet accuracy, safety, and reliability standards Performance Optimization: Optimize for cost, latency, and scalability across different LLM providers and deployment scenarios Cross-functional Collaboration: Work closely with product teams, backend engineers, and stakeholders to translate business requirements into technical AI solutions Required Qualifications Core AI/LLM Experience (Must-Have) 8+ years of software engineering experience with a focus on production systems 1.5+ years of hands-on LLM experience (2023-present) building real applications with GPT, Claude, Llama, or other modern LLMs Production LLM Applications: Demonstrated experience building customer-facing, scalable LLM-powered products with real user usage (not just POCs or internal tools) Agent Development: Experience building multi-step AI agents, LLM chaining, and complex workflow automation Prompt Engineering Expertise: Deep understanding of prompting strategies, few-shot learning, chain-of-thought reasoning, and prompt optimization techniques Technical Engineering Skills Python Proficiency: Expert-level Python skills for production AI systems Backend Engineering: Strong experience building scalable backend systems, APIs, and distributed architectures LangChain or Similar Frameworks: Experience with LangChain, LlamaIndex, or other LLM application frameworks API Integration: Proven ability to integrate multiple APIs and services to create advanced AI capabilities Production Deployment: Experience deploying and managing AI models in cloud environments (AWS, GCP, Azure) Quality & Evaluation Focus Testing & Evaluation: Experience implementing 
rigorous evaluation frameworks for LLM systems including accuracy, safety, and performance metrics A/B Testing: Understanding of experimental design for AI system optimization Monitoring & Reliability: Experience with production monitoring, alerting, and debugging complex AI systems Data Pipeline Management: Experience building and maintaining scalable data pipelines that power AI systems What Makes a Great Candidate Production-First Mindset You've built AI systems that real users depend on, not just demos or research projects You understand the difference between a working prototype and a production-ready system You have experience with user feedback, iterative improvements, and feedback systems Technical Depth with Business Impact You can design end-to-end systems, including back-end systems, asynchronous workflows, LLMs, and agentic systems You understand the cost-benefit trade-offs of different AI approaches You've made decisions about when to use different LLM providers, fine-tuning vs prompting, and architecture choices Evaluation & Quality Excellence You implement repeatable, quantifiable evaluation methodologies You track performance across iterations and can explain what makes systems successful You prioritize safety, reliability, and user experience alongside capability Adaptability & Learning You stay current with the rapidly evolving LLM landscape You can quickly adapt to new models, frameworks, and techniques You're comfortable working in ambiguous problem spaces and breaking down complex challenges
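The evaluation and A/B-testing responsibilities above can be made concrete with a small offline harness. The sketch below compares two prompt templates against a tiny labeled set; `call_llm` is a hypothetical stand-in for a real provider call, and the prompts, data, and scoring rule are illustrative only.

```python
# Offline A/B comparison of two prompt templates on a small labeled eval set.
from statistics import mean

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM provider call; returns a canned answer here."""
    return "Paris"

PROMPT_A = "Answer in one word: {question}"
PROMPT_B = "You are a precise assistant. Reply with only the answer.\nQuestion: {question}"

EVAL_SET = [
    ("What is the capital of France?", "paris"),
    ("What is 2 + 2?", "4"),
]

def accuracy(template: str) -> float:
    scores = []
    for question, expected in EVAL_SET:
        answer = call_llm(template.format(question=question)).lower()
        scores.append(1.0 if expected in answer else 0.0)
    return mean(scores)

print(f"prompt A accuracy: {accuracy(PROMPT_A):.2f}")
print(f"prompt B accuracy: {accuracy(PROMPT_B):.2f}")
```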
Posted 3 days ago
3.0 years
0 Lacs
india
On-site
Job Title: QA Engineer - Generative AI Testing
Location: Hybrid - Hyderabad/Vishakhapatnam
Job Type: Full-time

Job Summary:
We are seeking a skilled and detail-oriented QA Engineer with expertise in Generative AI testing to join our team. In this role, you will be responsible for ensuring the accuracy, safety, and performance of AI-driven applications, particularly in the field of Generative AI. You will design and execute test strategies, develop automation frameworks, and validate AI outputs to maintain high-quality user experiences.

Key Responsibilities:
- Design, develop, and execute comprehensive test plans for Generative AI models and applications.
- Validate AI-generated outputs for accuracy, coherence, bias, ethical considerations, and alignment with business requirements.
- Develop and maintain automated testing frameworks for AI-based applications, ensuring scalability and efficiency.
- Perform adversarial testing, edge case analysis, and security testing to assess AI vulnerabilities.
- Collaborate with AI/ML engineers, product managers, and developers to refine model performance and mitigate errors.
- Monitor AI model drift and ensure consistent performance across different inputs and datasets.
- Conduct performance testing to evaluate response times, scalability, and reliability of AI systems.
- Document test cases, test results, and defects while ensuring compliance with AI governance and ethical guidelines.
- Continuously explore new tools and methodologies to improve AI testing processes.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in Quality Assurance, with at least 1-2 years in AI/ML testing or Generative AI applications.
- Strong understanding of AI/ML concepts, NLP models, LLMs, and deep learning architectures.
- Experience with AI testing tools such as LangTest, DeepChecks, TruLens, or custom AI evaluation frameworks.
- Proficiency in test automation using Python, Selenium, PyTest, or similar frameworks.
- Familiarity with model evaluation metrics such as BLEU, ROUGE, perplexity, and precision-recall for AI-generated content.
- Knowledge of bias detection, adversarial testing, and ethical AI considerations.
- Experience working with APIs, cloud platforms (AWS, Azure, GCP), and MLOps practices.
- Strong analytical and problem-solving skills with attention to detail.
- Excellent communication and collaboration skills to work in cross-functional teams.

Nice to Have:
- Experience in testing AI chatbots, voice assistants, or image/video-generating AI.
- Knowledge of LLM fine-tuning and reinforcement learning from human feedback (RLHF).
- Exposure to regulatory and compliance frameworks for AI governance.

Why Join Us?
- Opportunity to work on cutting-edge AI products.
- Collaborate with a team of AI and software experts.
- Competitive salary, benefits, and career growth opportunities.

If you're interested, please share your resume with Dkadam@eprosoft.com
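As a minimal, hedged sketch of the automated output validation this role describes, the pytest example below checks a generated answer for an expected fact and for banned phrases. The `generate_answer` stub stands in for a call to the real model or API under test; the prompts, phrases, and checks are illustrative assumptions.

```python
# test_genai_outputs.py -- run with: pytest test_genai_outputs.py
import pytest

BANNED_PHRASES = ["as an ai language model", "i cannot help with that"]

def generate_answer(prompt: str) -> str:
    """Stand-in for the system under test; a real suite would call the model's API here."""
    return "Paris is the capital of France."

@pytest.mark.parametrize(
    "prompt,must_contain",
    [("What is the capital of France?", "paris")],
)
def test_answer_contains_expected_fact(prompt, must_contain):
    answer = generate_answer(prompt).lower()
    assert must_contain in answer

def test_answer_avoids_banned_phrases():
    answer = generate_answer("Summarize the refund policy.").lower()
    assert not any(phrase in answer for phrase in BANNED_PHRASES)
```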
Posted 3 days ago
0 years
0 Lacs
hyderabad, telangana, india
On-site
About Deutsche Börse Group
Headquartered in Frankfurt, Germany, we are a leading international exchange organization and market infrastructure provider. We empower investors, financial institutions, and companies by facilitating access to global capital markets. Our business areas cover the entire financial market transaction process chain, including trading, clearing, settlement and custody, digital assets and crypto, market analytics, and advanced electronic systems. As a technology-driven company, we develop and operate cutting-edge IT solutions globally.

About Deutsche Börse Group in India
Our presence in Hyderabad serves as a key strategic hub, comprising India's top-tier tech talent. We focus on crafting advanced IT solutions that elevate market infrastructure and services. Together with our colleagues from across the globe, we are a team of highly skilled capital market engineers forming the backbone of financial markets worldwide. We harness the power of innovation in leading technology to create trust in the markets of today and tomorrow.

Senior Power BI Analyst
Division: Deutsche Börse AG, Chief Information Officer/Chief Operating Officer (CIO/COO), Chief Technology Officer (CTO), Plan & Control

Your area of work:
The Deutsche Börse CTO area develops and runs the groupwide Information and Communication Technology (ICT) infrastructure, develops and operates innovative IT products, and offers services to the rest of the Group upon which they can build. The CTO area plays a significant role in achieving the Group's strategic goals by leading transformation and supporting a stable operating environment. The Plan & Control unit supplies reliable management information to the CTO and enables the other delivery units within the area to focus on their core activities by supplying central administration and coordination within the area. The successful candidate will support the Plan & Control unit in carrying out its responsibilities.

Your responsibilities:
- Design and develop BI solutions: Translate business requirements into technical specifications for BI reports, dashboards, and analytical tools, ensuring alignment with overall data architecture and governance.
- Implement and maintain BI infrastructure: Oversee the implementation, configuration, and ongoing maintenance of data pipelines, ensuring system stability, performance, and security.
- Conduct data analysis and validation: Perform rigorous data analysis to identify trends, patterns, and insights, validating data accuracy, completeness, and consistency across different sources.
- Develop and execute test plans: Create comprehensive test plans and test cases for BI solutions, ensuring data quality, report accuracy, and functionality across various scenarios and user groups.
- Collaborate with stakeholders: Work closely with business units, IT teams, and data governance teams to gather requirements, provide support, and ensure effective communication and collaboration throughout the BI development lifecycle.
- Document and train: Develop comprehensive documentation for BI solutions, including user manuals, technical specifications, and training materials for end users and support teams.
- Support the collection, consolidation, analysis, and reporting of key performance indicators from across Deutsche Börse Group.

Your profile:
- Power BI Desktop proficiency: Mastery of data modeling, creating relationships between tables, using DAX for calculated columns and measures, building interactive visualizations, and designing reports and dashboards.
- Data source connectivity: Experience connecting to various data sources, including databases (SQL Server, Oracle, etc.), cloud platforms (Azure, GCP), flat files (CSV, Excel), and APIs.
- ETL/data wrangling: Skills in data transformation and cleaning are crucial.
- DAX (Data Analysis Expressions): Demonstrable expertise in writing complex DAX expressions for calculations, aggregations, and filtering data is essential.
- Problem-solving: Ability to troubleshoot issues, identify root causes, and implement solutions related to data quality, report performance, or other BI-related challenges.
- Communication: Excellent written and verbal communication skills to effectively interact with technical and non-technical stakeholders. Ability to explain complex technical concepts in a clear and concise manner.
- Collaboration: Ability to work effectively in a team environment and collaborate with other developers, business analysts, and end users.
- Time management and prioritization: Ability to manage multiple tasks and prioritize workload effectively to meet deadlines.
- Expertise working with office applications (Word, SharePoint, Excel, etc.).
- Proficiency in written and spoken English; German skills a benefit.
- A relevant degree, or equivalent, in business, business administration, finance, accounting, communications, or IT.
Posted 3 days ago
6.0 - 11.0 years
30 - 35 Lacs
pune
Work from Office
Job Title: Technology Service Specialist, AVP
Location: Pune, India

Role Description
Technology underpins Deutsche Bank's entire business and is changing and shaping the way we engage, interact, and transact with all our stakeholders, both internally and externally. Our Technology, Data and Innovation (TDI) strategy is focused on strengthening engineering expertise, introducing an agile delivery model, and modernizing the bank's IT infrastructure with long-term investments and taking advantage of cloud computing. But this is only the foundation. We continue to invest in and build a team of visionary tech talent who will ensure we thrive in this period of unprecedented change for the industry. It means hiring the right people and giving them the training, freedom, and opportunity they need to do pioneering work.

Technology Infrastructure - End User Computing
We are a 576-strong global organisation that designs, delivers, and supports technology products at the core of the workplace environment, which directly impact user experience and enable productivity and collaboration. The goal of Developer Tooling is to provide enterprise development tools as services for teams across Deutsche Bank, enabling them to reach higher levels of maturity in their process. These services consist of everything necessary to support teams from initial program/project investment governance decisions through development, testing, and deployment, as well as compliance with the Bank's software processes. Tools include Bitbucket, JIRA, Confluence, TeamCity, Artifactory, and MF ALM.

The role comprises engineering expertise in the design, deployment, and maintenance of Developer Tooling (DT) tools, acting as SME for the application in terms of operational knowledge as well as requirements, and working with the product owner, level 2/3 teams, vendor contacts, and the engineering team to handle daily operational tasks and long-term projects. Typical operational tasks may include designing and implementing automation for application deployments, application upgrades, and application and data migrations, as well as understanding and adhering to bank policy and standards related to security and production release controls and working with the rest of the SDLC team to ensure full compliance.

This is a management role over engineering feature teams; the lead will work closely with the Product Manager and Product Owner to ensure efficient delivery, clear requirement setting and goals, and a confirmed book of work. The lead will ensure appropriate access is available for the engineering team. Additionally, this role may perform the IT Asset Owner (ITAO) duties across the suite of applications.

Your key responsibilities
- Work with our clients to deliver value through the delivery of high-quality software within an agile development lifecycle.
- Define and evolve the architecture of the components you are working on and contribute to architectural decisions at a department and bank-wide level.
- Bring deep industry knowledge into the feature team to understand problems, leverage design patterns and automation to support the CI and CD pipeline to production, and support emergent design within the agreed domain target architecture.
- Contribute to the wider domain goals to ensure flow and consistent standards and approach to software development while designing a common shared framework.
- Work with the right and robust engineering practices.
- Build functionalities using the latest technologies.
- Manage the engineering team.
- Work with the Product Manager and Product Owner on the architectural roadmap.
- Ensure quality and timely delivery against demand.

Your skills and experience
- Must have a comprehensive understanding of DevOps and the DevOps toolchain on cloud platforms such as GCP.
- Very good understanding of SOLID, DRY, and KISS practices, as well as pair programming.
- Must have hands-on software development experience using modern programming languages and frameworks like Java with Spring, along with a UI framework such as Angular, and architecture concepts like microservices and containers. Full-stack developer experience is a plus.
- Must have experience with large solution designs, sizing, and product configuration, plus consulting experience for high-level and low-level solution architecture.
- Excellent understanding of database architecture, particularly Oracle, Postgres, and MongoDB.
- Comprehensive understanding of and hands-on experience with Docker, Kubernetes, and Helm.
- Experience with Test-Driven Development (TDD) and Behavior-Driven Development (BDD).
Posted 3 days ago
5.0 - 7.0 years
12 - 17 Lacs
chennai, bengaluru
Work from Office
Job Summary Synechron is seeking a skilled API Automation Engineer to join our dynamic testing team. In this role, you will be responsible for designing, developing, and executing automation solutions to ensure the robustness and reliability of APIs within our financial services domain. Your expertise will help establish comprehensive testing strategies, reduce defects, and support the delivery of high-quality software solutions aligned with business objectives. This role offers an opportunity to contribute to critical financial trading, margin, and wealth management systems, enhancing client satisfaction and operational efficiency. Software Requirements Required: Java (Version 8 or higher) Python (Version 3.x) Test automation frameworks such as REST-assured, pytest, or similar Version control tools (Git/GitHub) CI/CD tools (Jenkins, CircleCI, or equivalent) Preferred: API management tools (Postman, Swagger/OpenAPI) Test management tools (JIRA, TestRail) Containerization platforms (Docker, Kubernetes) Overall Responsibilities Develop, implement, and maintain automated API test scripts based on functional and non-functional requirements. Design end-to-end testing strategies to maximize automation coverage and efficiency. Establish guardrails and validation checkpoints to detect, diagnose, and prevent software defects early in the development lifecycle. Collaborate closely with developers, business analysts, and QA teams to align testing approaches with project goals. Maintain comprehensive documentation of testing processes, strategies, and results. Regularly review and optimize test cases for performance, accuracy, and reusability. Contribute to the continuous improvement of testing practices and automation frameworks. Provide insights and reports on testing progress, defect trends, and risk assessments to stakeholders. Technical Skills (By Category) Programming Languages Required: Java, Python Preferred: JavaScript, Bash scripting Databases/Data Management Required: Basic understanding of SQL databases (e.g., Oracle, MySQL, or similar) for test data management Preferred: Experience with NoSQL databases (e.g., MongoDB) Cloud Technologies Required: Familiarity with cloud concepts (e.g., AWS, Azure, or GCP) as applicable to API testing environments Preferred: Hands-on experience with cloud-based testing or deployment environments Frameworks and Libraries Required: REST-assured, pytest, or equivalent API testing libraries Preferred: Selenium, Postman, or other API tooling integrations Development Tools and Methodologies Required: Agile/Scrum, Test-driven development (TDD), Behavior-driven development (BDD) frameworks Preferred: Container orchestration tools (e.g., Docker, Kubernetes) Security Protocols Preferred: Basic understanding of API security standards such as OAuth2, JWT, and TLS/SSL Experience Requirements 5-7 years of professional experience in software testing and automation, specifically with API testing. Proven track record establishing end-to-end test automation strategies in financial or trading environments. Demonstrated understanding of financial domain processes such as trading, margin calculations, and wealth management systems. Experience in defect prevention via guardrails, validations, and quality gates. Alternative pathways: Candidates with extensive automation experience in related industries or with demonstrable skills in financial API testing may be considered. 
Day-to-Day Activities
- Writing and executing automated API test cases and validating responses against expected outcomes.
- Participating in daily stand-ups, sprint planning, and retrospective meetings to coordinate testing efforts.
- Collaborating with development teams to incorporate automation into CI/CD pipelines.
- Analyzing test results, identifying root causes of failures, and documenting defects.
- Monitoring and updating testing frameworks to adapt to system changes.
- Providing regular updates to project stakeholders on testing progress and quality metrics.
- Contributing to process improvements in test strategy, tooling, and test data management.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, Engineering, or an equivalent technical discipline.
- Certifications such as ISTQB, Certified Agile Tester, or relevant automation/platform-specific credentials are preferred.
- Prior training or certifications in API security and cloud services would be advantageous.
- Commitment to continuous learning and professional development in automation technologies and financial domain knowledge.

Professional Competencies
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex API issues.
- Effective communication skills for stakeholder engagement and cross-team collaboration.
- Capable of working independently and managing multiple priorities in a fast-paced environment.
- Team-oriented mindset with collaborative problem-solving abilities.
- Adaptability to evolving technologies, tools, and project requirements.
- Demonstrates initiative in identifying areas for process improvement and automation opportunities.

Location: Bengaluru, Chennai, Hinjewadi, Pune
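Although the stack above lists both REST-assured (Java) and pytest, here is a hedged pytest sketch of the kind of API assertion work described, using the `requests` library. The base URL, endpoints, and expected fields are illustrative placeholders rather than a real service contract.

```python
# test_accounts_api.py -- run with: pytest test_accounts_api.py
import requests

BASE_URL = "https://api.example.com"  # illustrative endpoint

def test_get_account_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/v1/accounts/123", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    for field in ("accountId", "currency", "balance"):
        assert field in body

def test_unknown_account_returns_404():
    resp = requests.get(f"{BASE_URL}/v1/accounts/does-not-exist", timeout=10)
    assert resp.status_code == 404
```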
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
hyderabad, telangana, india
On-site
About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe. Position Summary: Data Governance Tooling & Lifecycle Mgmt. Engineering Support: (Supervisor, Data Operations & Management) As the Manager of Data Governance Tooling & Lifecycle Management Engineering Support, you will play a key role in implementing, maintaining, and optimizing enterprise data governance tools and lifecycle automation processes. This hands-on role supports metadata management, policy execution, and data lifecycle tracking across cloud-native platforms including Google Cloud (BigQuery) and AWS (Redshift). You’ll work closely with governance, engineering, and compliance teams to ensure data is cataloged, classified, accessible, and managed throughout its lifecycle. Who we are looking for: Primary Responsibilities: Governance Tooling Implementation & Support Implement and manage data governance platforms such as Collibra, including configuration, workflow automation, integrations, and user management. Maintain metadata harvesting, classification, and cataloging across cloud environments. Ensure accurate population of business and technical metadata, including lineage and stewardship assignments. Lifecycle Management Automation: Engineer and support lifecycle governance for data assets—from creation to archival—across GCP and AWS. Develop automation scripts and pipelines to enforce data retention, purging, and archival policies. Collaborate with infrastructure teams to apply lifecycle rules across storage and warehouse systems. Metadata & Integration Enablement: Integrate governance tooling with cloud-native platforms like Big Query, Redshift, GCS, and S3 to maintain real-time visibility into data usage and quality. Support lineage capture across pipelines and systems, including orchestration tools (e.g., Airflow, Cloud Composer). Align metadata models with organizational taxonomy and business glossaries. Policy Execution & Compliance Support: Implement automated policy rules related to data classification, access control, and privacy. Ensure tooling compliance with internal governance standards and external regulatory requirements (e.g., GDPR, HIPAA, CCPA). Support audit processes by maintaining accurate lineage, ownership, and policy enforcement records. Collaboration & Documentation: Work with data stewards, engineers, and architects to support governance onboarding and issue resolution. Maintain documentation and training materials for platform users and governance workflows. Provide insights and recommendations for tooling improvements and scaling support across domains. Skill: 3 to 5 years of experience in data governance engineering, metadata management, or platform operations roles. 
Strong hands-on experience with:
Data governance platforms (e.g., Collibra, Alation, Informatica)
Cloud data platforms: GCP (BigQuery, GCS) / AWS (Redshift, S3)
SQL and Python for metadata extraction, pipeline integration, and automation
API integrations between governance tools and cloud platforms
Knowledge of data classification frameworks, retention policies, and regulatory compliance standards.
Bachelor’s degree in Computer Science, Data Management, Information Systems, or a related field.
Preferred Experience:
Experience supporting Retail or QSR data environments with complex, multi-market governance needs.
Exposure to CI/CD processes, Terraform/IaC, or cloud-native infrastructure tooling for lifecycle governance automation.
Familiarity with data mesh concepts and distributed stewardship operating models.
Current GCP Associate (or Professional) certification.
Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.
Additional Information:
McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
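As a hedged illustration of the lifecycle-automation responsibility above (not McDonald's actual tooling), a minimal Python sketch that purges GCS objects older than a retention window; the bucket name and retention period are hypothetical:

```python
# Hypothetical retention sweep for a GCS bucket; bucket name and window are placeholders.
from datetime import datetime, timedelta, timezone

from google.cloud import storage

RETENTION_DAYS = 365                     # assumed policy, not from the posting
BUCKET_NAME = "example-archive-bucket"   # placeholder bucket


def purge_expired_objects() -> int:
    client = storage.Client()
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    deleted = 0
    for blob in client.list_blobs(BUCKET_NAME):
        # time_created is timezone-aware; delete anything past the retention window.
        if blob.time_created < cutoff:
            blob.delete()
            deleted += 1
    return deleted


if __name__ == "__main__":
    print(f"Purged {purge_expired_objects()} expired objects")
```

In practice such a sweep would run on a schedule (e.g., Cloud Composer/Airflow) and log its actions for audit evidence.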
Posted 3 days ago
5.0 - 7.0 years
20 - 22 Lacs
noida
Hybrid
The Role
As a Senior DevOps Engineer, you will be a key player in designing, implementing, and maintaining our cloud infrastructure and CI/CD pipelines. You will report directly to the Head of DevOps and work closely with development and operations teams to automate processes, improve system performance, and ensure the security and stability of our applications.
Responsibilities
- Design, implement, and maintain scalable, secure, and highly available cloud infrastructure.
- Develop and manage CI/CD pipelines to automate software delivery and deployment (see the sketch after this posting).
- Implement and manage monitoring, logging, and alerting solutions to ensure system health and performance.
- Collaborate with development teams to optimize application performance and troubleshoot production issues.
- Implement and enforce security best practices across our infrastructure and applications.
- Automate operational tasks and processes to improve efficiency and reduce manual effort.
- Participate in future on-call rotations for critical incidents and provide timely resolution.
- Mentor junior DevOps engineers and contribute to the growth of the team.
- Stay up-to-date with emerging technologies and industry best practices.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 5+ years of experience in a DevOps or SRE role.
- Strong experience with GCP.
- Proficiency in scripting languages (e.g., Python, Bash, JavaScript).
- Extensive experience with the CI/CD tool GitHub Actions.
- Solid understanding of containerization technologies (e.g., Docker, Kubernetes).
- Experience with the infrastructure-as-code tool Terraform; the ability to build custom providers from scratch is a bonus.
- Familiarity with monitoring and logging tools such as Prometheus and New Relic.
- Strong understanding of networking concepts and security principles.
- Excellent problem-solving, communication, and collaboration skills.
Preferred Qualifications
- Certifications in relevant cloud platforms.
- Experience with microservices architecture.
- Knowledge of database administration (e.g., SQL, NoSQL).
- Experience with performance tuning and optimization.
Why Us?
Besides building an amazing company, we also aim to create the company we'd love to work for, and it starts with defining our core values:
Community - We are a tight-knit team. We help each other, grow together, and win together.
Balance - Live your full life and be fulfilled at work, but not at your life's expense.
Ownership - Regardless of what you do, own your work and be proud to stand behind it.
Have fun - Who says building a new product needs to be a serious affair? The only serious thing is how awesome it is going to be.
We believe our people are our greatest strength. We're committed to fostering a culture of innovation, collaboration, and continuous growth where talented professionals can thrive and shape the future of technology.
To Apply: Please share your resume and cover letter to hiring@alyssum.global with the subject: Application for Sr. DevOps-GCP
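A minimal sketch, assuming a hypothetical health endpoint and alert webhook (neither is from the posting), of the kind of post-deploy smoke check a CI/CD pipeline like the one described above might run before promoting a release:

```python
# Hypothetical post-deploy smoke check; URL and webhook are placeholders.
import sys

import requests

HEALTH_URL = "https://app.example.com/healthz"       # placeholder endpoint
ALERT_WEBHOOK = "https://hooks.example.com/alerts"   # placeholder webhook


def main() -> int:
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        healthy = resp.status_code == 200 and resp.json().get("status") == "ok"
    except requests.RequestException:
        healthy = False

    if not healthy:
        # Notify the team and fail the pipeline step so the release can be rolled back.
        requests.post(
            ALERT_WEBHOOK,
            json={"text": f"Smoke check failed for {HEALTH_URL}"},
            timeout=5,
        )
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A non-zero exit code fails the pipeline job, which is how the deployment gate would typically be wired.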
Posted 3 days ago
2.0 - 5.0 years
4 - 9 Lacs
noida, delhi / ncr
Hybrid
Experience in cloud infrastructure, virtualization (VMware), and hybrid environments. Experience with Citrix and SFTP. Familiarity with monitoring tools (Datadog, Grafana, Splunk) and alert management.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra, india
On-site
Hi, Experience: 3-7 years. Skills: GCP, Python, PL/SQL, ETL. If interested, please share your resume at deepika.eaga@quesscorp.com
Posted 3 days ago
12.0 years
0 Lacs
pune, maharashtra, india
Remote
Who We Are
Addepar is a global technology and data company that helps investment professionals provide the most informed, precise guidance for their clients. Hundreds of thousands of users have entrusted Addepar to empower smarter investment decisions and better advice over the last decade. With client presence in more than 50 countries, Addepar's platform aggregates portfolio, market and client data for over $8 trillion in assets. Addepar's open platform integrates with more than 100 software, data and services partners to deliver a complete solution for a wide range of firms and use cases. Addepar embraces a global flexible workforce model with offices in New York City, Salt Lake City, Chicago, London, Edinburgh, Pune and Dubai.
The Role
Addepar is looking for an experienced SDET Manager to help build the product SDET team at Addepar. As SDET Manager, you will be responsible for implementing the quality engineering processes needed to deliver the trading solution on time and with high quality. This is a strategic and highly visible role, with the SDET Manager partnering with both external and internal teams, including Solutions Architecture, Data Solutions, Product, Engineering, and the release team.
What You’ll Do
Own Quality Strategy
Develop and execute comprehensive test strategies and test plans for the Trading application.
Develop and maintain technical documentation of testing processes and results.
Drive end-to-end quality initiatives for team features and releases.
Design and implement automated test suites for critical business flows.
Collaborate with the development team to identify opportunities in unit and integration tests.
Develop automated testing strategies that can run in a CI/CD environment.
Analysis and Reporting
Participate in and represent the QE function in Go/No-Go decisions.
Coordinate with global teams of SDETs, Dev, and Product for release testing.
Manage defect tracking and analyze defect leakage to identify areas for improvement (see the sketch after this posting).
Participate in incident postmortems and drive preventive measures.
Create, maintain, and report on quality dashboards for visibility.
Production Quality and Monitoring
Monitor production releases and validate deployment success.
Prepare and own production verification tests.
Analyze production incidents and propose improvements.
Technical Leadership
Stay up to date with industry trends and emerging technologies in Quality Engineering.
Mentor junior team members on testing practices.
Provide testing consultation and guidance to development teams.
Help with tooling and actively participate in the tool selection process.
Who You Are
At least 12 years of experience in software testing and quality assurance for front-office applications in investment banking or wealth management.
Strong understanding of software testing principles, including black-box and white-box testing.
Hands-on experience in defining, designing, and executing acceptance-criteria-based scenarios.
Strong knowledge of Quality Engineering in Agile ways of working.
Strong knowledge of CI/CD tools.
Experience leading SDET teams for multiple products and managing delivery outcomes.
Knowledge of Equity products and the product lifecycle is a must.
Ability to work in a fast-paced, high-pressure team with tight deadlines and deliver on time.
Experience working with a global team and good communication skills.
Strong problem-solving skills and attention to detail.
Expertise in Jira and knowledge of DORA metrics are a must.
Experience with Kubernetes, Databricks, and cloud platforms (AWS/Azure/GCP) is a huge plus.
Our Values
Act Like an Owner - Think and operate with intention, purpose and care. Own outcomes.
Build Together - Collaborate to unlock the best solutions. Deliver lasting value.
Champion Our Clients - Exceed client expectations. Our clients’ success is our success.
Drive Innovation - Be bold and unconstrained in problem solving. Transform the industry.
Embrace Learning - Engage our community to broaden our perspective. Bring a growth mindset.
In addition to our core values, Addepar is proud to be an equal opportunity employer. We seek to bring together diverse ideas, experiences, skill sets, perspectives, backgrounds and identities to drive innovative solutions. We commit to promoting a welcoming environment where inclusion and belonging are held as a shared responsibility. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
PHISHING SCAM WARNING: Addepar is among several companies recently made aware of a phishing scam involving con artists posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote “interviews,” and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from Addepar without a formal interview process. Additionally, Addepar will not ask you to purchase equipment or supplies as part of your onboarding process. If you have any questions, please reach out to TAinfo@addepar.com.
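As a hedged illustration of the defect-leakage analysis mentioned above (not Addepar's actual reporting), a minimal sketch that computes a leakage rate from hypothetical pre-release and production defect counts:

```python
# Hypothetical defect-leakage calculation for a release quality dashboard.
from dataclasses import dataclass


@dataclass
class ReleaseDefects:
    release: str
    found_pre_release: int    # defects caught by QE before go-live
    found_in_production: int  # defects that leaked to production


def leakage_rate(d: ReleaseDefects) -> float:
    """Share of all defects for a release that escaped to production."""
    total = d.found_pre_release + d.found_in_production
    return d.found_in_production / total if total else 0.0


if __name__ == "__main__":
    # Example numbers are invented for illustration only.
    for rel in [ReleaseDefects("2024.07", 42, 3), ReleaseDefects("2024.08", 35, 6)]:
        print(f"{rel.release}: defect leakage {leakage_rate(rel):.1%}")
```

A rising leakage rate across releases is the kind of signal that would feed the improvement areas and dashboards described in the posting.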
Posted 3 days ago
5.0 years
0 Lacs
gurugram, haryana, india
On-site
Designation: Architect/Lead - DevOps
Office Location: Gurgaon/Bangalore
Position Description:
To lead the design, implementation, and continuous improvement of our infrastructure and DevOps practices. You will act as a senior technical leader, guiding the DevOps Manager and mentoring the team to build a scalable, resilient, and cost-effective cloud infrastructure. Your expertise will be crucial in defining processes, breaking down complex tasks, and evangelizing DevOps culture across the organization.
Key Responsibilities & Critical Work Functions:
Infrastructure Strategy and Design:
Lead the design and implementation of robust, scalable, and fault-tolerant infrastructure solutions on GCP, with a focus on high availability and cost optimization.
Translate business and application requirements into a clear and documented infrastructure design.
Define and implement best practices for infrastructure as code (IaC), CI/CD, security, and monitoring.
Technical Leadership and Mentorship:
Guide the DevOps Manager in technical decision-making, task breakdown, and process definition.
Mentor and coach mid-level and junior DevOps engineers, fostering a culture of collaboration, flexibility, and knowledge sharing.
Act as a key advocate for the DevOps mindset and culture across the organization, helping other teams to adopt DevOps principles.
Automation and Process Improvement:
Automate infrastructure provisioning, configuration management, and application deployments to improve efficiency and consistency.
Continuously evaluate and improve CI/CD pipelines, integrating security and quality checks throughout the development lifecycle.
Identify opportunities for process improvement and lead initiatives to streamline operations and reduce manual effort.
Operations and Support:
Build and maintain robust monitoring and alerting platforms to ensure 24x7 visibility into system health and performance (see the sketch after this posting).
Participate in on-call rotations to respond to and resolve after-hours incidents, ensuring high availability during traffic spikes and system failures.
Troubleshoot complex system issues that may cause downtime or performance degradation, and conduct root cause analysis to prevent recurrence.
Required Skills and Competencies:
Cloud Solutions: Deep expertise in cloud services, particularly GCP (GKE, BigQuery) and AWS (EC2, RDS, S3, IAM).
Infrastructure as Code: Hands-on experience with Terraform, Ansible, or similar tools to manage infrastructure programmatically.
CI/CD: Proven experience in building and managing CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar.
Containerization and Orchestration: Production-grade experience with Docker and Kubernetes for container orchestration.
Networking: Strong knowledge of network protocols, load balancing (e.g., HAProxy), and cloud networking concepts.
Monitoring and Observability: Experience with monitoring tools like New Relic, Datadog, or Grafana, and log analysis in a distributed environment.
Databases: Knowledge of both relational (e.g., MySQL) and NoSQL (e.g., Aerospike, MongoDB) databases.
Scripting: Proficiency in Python and Bash scripting for automation.
Agile Methodologies: Experience working in an Agile/Scrum environment.
Communication: Excellent technical writing and communication skills, with the ability to articulate complex technical designs to both technical and non-technical audiences.
Qualifications and Experience:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5-7+ years of experience in a senior DevOps or Infrastructure Engineering role.
Proven track record of designing, implementing, and maintaining multi-tiered, production-grade infrastructure in a cloud environment.
Bonus Points:
Experience in designing high-availability and cloud-native solutions.
Knowledge of Hadoop, AWS Kinesis, or other big data technologies.
Experience in low-latency architectures (e.g., Ad Tech, Trading).
A genuine interest in open-source technologies and personal projects.
Solid understanding of security best practices for operating systems and cloud services.
Work Environment Details:
About Affle: Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer recommendations and conversions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and also by reducing digital ad fraud. Affle powers unique and integrated consumer journeys for marketers to drive high-ROI, measurable, outcome-led advertising through its Affle2.0 Consumer Platforms Stack, which includes Appnext, Jampp, MAAS, mediasmart, RevX, Vizury and YouAppi. Affle 3i Limited successfully completed its IPO in India and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter for Affle 3i Limited, and its investors include Microsoft and Bennett Coleman & Company (BCCL), among others. For more details, please visit: www.affle.com
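A minimal sketch of the kind of health-and-alerting automation this role describes, assuming a hypothetical service endpoint, latency threshold, and webhook (none of which come from the posting):

```python
# Hypothetical latency check with alerting; endpoints and threshold are placeholders.
import time

import requests

SERVICE_URL = "https://service.example.com/ping"     # placeholder
ALERT_WEBHOOK = "https://hooks.example.com/devops"   # placeholder
LATENCY_THRESHOLD_MS = 500                           # assumed SLO, illustrative only


def check_once() -> None:
    start = time.monotonic()
    try:
        resp = requests.get(SERVICE_URL, timeout=5)
        latency_ms = (time.monotonic() - start) * 1000
        ok = resp.status_code == 200 and latency_ms <= LATENCY_THRESHOLD_MS
    except requests.RequestException:
        latency_ms, ok = float("inf"), False

    if not ok:
        # Push an alert so the on-call engineer is paged before users notice.
        requests.post(
            ALERT_WEBHOOK,
            json={"text": f"{SERVICE_URL} unhealthy (latency {latency_ms:.0f} ms)"},
            timeout=5,
        )


if __name__ == "__main__":
    check_once()
```

In a real setup this signal would come from Prometheus, Datadog, or New Relic rather than a standalone script; the sketch only shows the shape of the check-and-alert loop.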
Posted 3 days ago
0 years
0 Lacs
gurugram, haryana, india
On-site
Job Title: Data Engineer
Location: Gurgaon / Bangalore / Chennai / Hyderabad (Hybrid Mode)
About the Role
We are looking for a highly skilled and experienced Data Engineer with strong expertise in backend engineering, API development, and modern cloud data platforms. The ideal candidate will be hands-on with data integrations, building scalable data pipelines, and managing cloud-based data solutions across multi-cloud environments.
Key Responsibilities
Design, build, and optimize data pipelines and API services for large-scale applications.
Work extensively with structured and unstructured data across multiple sources.
Implement data integrations, transformations, and cloud migration projects.
Collaborate with cross-functional teams to deliver scalable and reliable backend solutions.
Leverage cloud technologies (GCP, SpannerDB, BigQuery, Pub/Sub, Kafka) for data engineering solutions (see the sketch after this posting).
Ensure best practices in API development, backend design, and performance optimization.
Required Skills & Experience
Programming: Strong expertise in Java (mandatory) and good working knowledge of Python.
Backend & API: Proven experience with Spring Boot, API services, and microservices development.
Databases: Hands-on experience with SQL, SpannerDB, and BigQuery.
Messaging & Data Streams: Experience with Kafka and Pub/Sub.
Cloud Platforms: Strong exposure to GCP and multi-cloud environments.
Data Integrations & Pipelines: Skilled in designing and implementing end-to-end pipelines and data migration strategies.
Preferred Qualities
Excellent communication and stakeholder management skills.
Strong problem-solving and analytical thinking.
Team player with the ability to handle multiple projects simultaneously.
Adaptable and eager to work in a dynamic environment.
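A minimal Python sketch of a Pub/Sub-to-BigQuery ingestion loop of the kind this role describes; the project, subscription, and table IDs are hypothetical placeholders, and a production pipeline would add batching, schema validation, and dead-lettering:

```python
# Hypothetical Pub/Sub -> BigQuery ingestion sketch; IDs below are placeholders.
import json

from google.cloud import bigquery, pubsub_v1

PROJECT_ID = "example-project"                  # placeholder
SUBSCRIPTION_ID = "orders-sub"                  # placeholder
TABLE_ID = "example-project.analytics.orders"   # placeholder

bq_client = bigquery.Client(project=PROJECT_ID)
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)


def handle(message) -> None:
    # Each Pub/Sub message carries one JSON event; stream it into BigQuery.
    row = json.loads(message.data.decode("utf-8"))
    errors = bq_client.insert_rows_json(TABLE_ID, [row])
    if errors:
        message.nack()   # let Pub/Sub redeliver on insert failure
    else:
        message.ack()


if __name__ == "__main__":
    streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
    print(f"Listening on {subscription_path}...")
    streaming_pull.result()  # blocks until interrupted
```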
Posted 3 days ago
8.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Software Engineer III (iSeries Engineering)
Location: Staples India Business Innovation Hub, Chennai
About the Role:
We are seeking a highly skilled and experienced RPG Developer to join our Order Management Systems (OMS) team. This role is ideal for a technical expert passionate about building and maintaining high-performance applications using RPG on IBM i (AS/400, iSeries) systems. You will play a critical role in designing, enhancing, and maintaining Order Management solutions that support business-critical operations across multiple domains, with a primary focus on B2B order creation and management.
Responsibilities:
Design, develop, test, and maintain RPG-based applications on IBM i platforms.
Lead enhancements and bug fixes in Order Management Systems (OMS), with a deep understanding of customer account management, order entry, order fulfillment, billing, and returns modules.
Translate business requirements into high-quality, scalable, secure, and maintainable code.
Work collaboratively with Product Owners, QA, and multiple application and infrastructure teams located in the USA and India to deliver end-to-end OMS solutions.
Conduct root cause analysis and performance tuning of the applications.
Modernize legacy RPG code (e.g., RPG III to RPG IV/ILE/Free Format) where necessary.
Create and maintain technical documentation, including code reviews and release notes.
Comply with Change Management, PCI, PII, and SOX policies and best practices.
Provide technical mentorship to junior developers.
Required Skills & Qualifications:
8+ years of experience in RPG development, including RPG IV, RPG ILE, and Free Format RPG.
Strong hands-on experience with IBM i (AS/400, iSeries) platforms.
Deep understanding of Order Management business processes.
Proficiency in CL, DDS, DB2/400, and service programs/modules.
Solid experience with EDI, APIs, and integrations between RPG-based and external systems.
Proven track record in performance tuning and troubleshooting RPG applications.
Ability to work independently and drive development in a fast-paced environment.
Strong communication skills and experience working with cross-functional teams.
Preferred (Nice to Have):
Experience with modernization tools like Visual Studio Code, RDi, Implementer, SQL, APIs, web services on IBM i, and AI.
Java, Sterling OMS, Cloud (Azure, GCP), Microservices, EDA, DevOps, CI/CD, ServiceNow, Jira, Confluence, Snowflake, Power BI, Copilot, Python.
Familiarity with Scaled Agile Framework (SAFe)/Scrum/Kanban frameworks.
Exposure to e-Commerce, Retail, and Logistics domains.
Posted 3 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Before applying for a job, select your preferred language from the options available at the top right of this page.
Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Explore innovative opportunities, discover our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to guide teams, there are roles suited to your aspirations and skills, today and tomorrow.
Job Description
The Intermediate Application Developer will be part of a team that is responsible for modernizing a legacy system and converting it to a cloud-based application. This application is used daily by UPS Operations worldwide.
The Intermediate Application Developer applies the principles of software engineering to design, develop, maintain, test, and evaluate computer software that provides business capabilities, solutions, and/or product suites. They provide systems life cycle management (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.) to ensure delivery of technical solutions is on time and within budget. They will research and support the integration of emerging technologies, provide knowledge and support for applications’ development, integration, and maintenance, and develop program logic for new applications or analyze and modify logic in existing applications. They will analyze requirements, test and integrate application components, and ensure that system improvements are successfully implemented. They may focus on web/internet applications specifically, using a variety of languages and platforms, and define application complexity drivers, estimate development efforts, create milestones and/or timelines, and track progress towards completion.
The Intermediate Application Developer provides specific functional expertise that is theoretical and conceptual in nature. This expertise is typically acquired through a combination of university education and experience within a field. They may have responsibility for supervising others in the capacity of a “player coach,” but the primary focus is individual expertise. Particularly at higher levels, sophisticated subject matter expertise is a requirement for success. The Intermediate Application Developer applies in-depth conceptual and practical knowledge in their own job discipline and basic knowledge of related job disciplines, solves complex problems related to their own job discipline by taking a new perspective on existing solutions, and builds consensus. Regularly acts as a resource for colleagues with less experience. Works independently and receives minimal guidance.
Agile Engineering Best Practices
Stays current on industry trends and serves as an expert on the software development lifecycle and agile engineering practices, coaching others when needed.
Recommends and plans for the application of agile methodologies vs. traditional methodologies, based on a comparison of various approaches to achieve the most effective development outcome.
Identifies appropriate agile engineering practices (e.g., Extreme Programming techniques such as pair programming and test-driven development) and coaches others in applying them in software development projects.
Project Management
Integrates timelines and milestones across projects, identifying areas of synergy or dependency.
Determines actual or potential gaps in resourcing for projects and recommends strategies to mitigate them.
Evaluates the progress of projects and makes adjustments (e.g., to task order or timeline) to keep the project on track.
Troubleshooting
Conducts a deep review of data and issues to quickly reveal the root cause of a problem.
Recommends interim and long-term solutions to complex problems to ensure successful resolution.
Executes solutions to complex problems; guides the analysis of a problem all the way to a successful resolution.
Application Development/Programming
Creatively tests and maintains software applications and related programs and procedures by using a variety of software development tools, following the design requirements of the customer.
System and Technology Integration
Possesses knowledge of features and facilities for integration and communication among applications, databases, and technology platforms to bring together different components and form a fully functional solution to a business problem.
Technology Advising/Consulting
Gains insight into how customers utilize technology for their competitive advantage and applies this knowledge to suggest areas for improvement. Conveys the right information to the correct parties to ensure that proposals for improvements are given the proper consideration and technical issues are resolved in a timely manner. Contributes to product development by identifying industry change, listening to customer needs, capturing feedback, and communicating that feedback to the business.
Qualifications
Experience with C#, SQL, APIs, Azure/GCP, Android, .NET MAUI
Experience with cloud technology is a plus
Excellent written and verbal communication skills
Ability to work independently and in a team environment
Time management
Detail oriented
Bachelor’s degree or international equivalent
Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics or a related field - preferred
Contract Type: Fixed-term contract (CDD)
At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
Posted 3 days ago
10.0 years
0 Lacs
mumbai metropolitan region
On-site
About Sun Pharma:
Sun Pharmaceutical Industries Ltd. (Sun Pharma) is the fourth largest specialty generic pharmaceutical company in the world with global revenues of US$ 5.4 billion. Supported by 43 manufacturing facilities, we provide high-quality, affordable medicines, trusted by healthcare professionals and patients, to more than 100 countries across the globe.
Job Summary
We are seeking an experienced Data Backup Manager (Administrator) to manage and execute data backup, restoration, and disaster recovery operations across a hybrid cloud infrastructure. This role is pivotal in ensuring data protection, integrity, availability, and regulatory compliance within highly regulated environments such as Pharma, Healthcare, and BFSI. The ideal candidate will have hands-on experience managing backup platforms across on-premises and cloud environments, strong technical proficiency with tools such as Actifio, Veritas, Commvault, Rubrik, and Veeam, and familiarity with GxP, 21 CFR Part 11, and audit requirements. The role also requires a sound understanding of virtualization, storage, cloud backup, data classification, and resiliency.
Roles and Responsibilities
Backup & Restore Operations
Manage daily operations of enterprise backup solutions, including scheduling, monitoring, and validating backup jobs.
Perform regular data restore testing to verify RTO/RPO compliance (see the sketch after this posting).
Handle escalation and resolution of backup failures, job exceptions, and system issues.
Administer tape backup and archival management, including encryption, vaulting, and lifecycle tracking.
Technical Platform Management
Administer hybrid backup platforms across VMware/Hyper-V, SAN/NAS storage, and cloud providers (AWS, Azure, GCP).
Implement and manage cloud-based backup services and immutable storage for ransomware protection.
Create and maintain HLDs/LLDs, SOPs, validation scripts, and compliance documentation.
Compliance, Audit & Validation
Ensure all backup systems and operations align with GxP, 21 CFR Part 11, HIPAA, and corporate audit standards.
Participate in internal and external audits and assist in the preparation of documentation and evidence.
Maintain system validation lifecycle documentation for regulated applications.
Data Governance & Resiliency
Implement data classification, tagging, and backup policies based on sensitivity and criticality.
Support business continuity and DR planning by ensuring recoverability of mission-critical data.
Monitor storage usage trends and manage data archival and retention strategies.
Project & Vendor Coordination
Collaborate with infrastructure and application teams for backup onboarding of new systems.
Support the Backup Lead in preparing RFPs, RFIs, BOMs, and project execution activities.
Interface with OEMs and vendors to ensure service delivery, upgrades, patching, and compliance.
Reporting & Stakeholder Communication
Generate and share weekly/monthly backup performance and compliance reports.
Maintain dashboards and documentation for internal IT teams and external auditors.
Provide input to leadership on capacity planning, license utilization, and upgrade planning.
Job Scope:
Internal Interactions (within the organization): IT functional teams across the globe
External Interactions (outside the organization): Vendors & OEMs
Geographical Scope: Global
Job Requirements
Educational Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field
Specific Certifications: Certifications and training in the following technology domains:
Certifications such as Veeam VMCE, Commvault Professional, Veritas Certified Specialist, AWS/Azure/GCP cloud certifications, or equivalent.
Project management certification (PMP, PRINCE2) and familiarity with ITIL.
Experience in regulated industries: Pharma, Healthcare, BFSI.
Familiarity with data classification tools, encryption methods, and DLP policies.
Knowledge of data immutability, air-gapping strategies, and ransomware recovery best practices.
Required Qualifications
8-10 years of relevant experience in data backup administration and recovery.
Hands-on expertise with at least 3-4 major backup platforms: Actifio, Veritas, Commvault, Rubrik, Veeam, Google Cloud Backup.
Solid experience with virtualization platforms and SAN/NAS/object storage systems.
Strong understanding of tape and cloud archival, data classification, immutability, and resiliency strategies.
Knowledge and experience in regulated environments with compliance standards such as GxP, 21 CFR Part 11, HIPAA, ISO 27001.
Proactive and analytical mindset with a passion for operational excellence.
Strong troubleshooting skills and attention to detail.
Selection Process:
Interested candidates are required to apply through the listing on Jigya. Only applications received through Jigya will be evaluated further.
Shortlisted candidates may need to appear in an online assessment and/or a technical screening interview administered by Jigya on behalf of Sun Pharma.
Candidates selected after the screening rounds will be processed further by Sun Pharma.
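A minimal sketch, assuming a hypothetical S3 backup bucket and a 24-hour RPO, of the kind of automated check that flags whether the latest backup falls within the recovery point objective; the tooling and names are illustrative, not Sun Pharma's actual platforms:

```python
# Hypothetical RPO check: verify the newest backup object in S3 is recent enough.
from datetime import datetime, timedelta, timezone

import boto3

BACKUP_BUCKET = "example-backup-bucket"   # placeholder
BACKUP_PREFIX = "db-backups/"             # placeholder
RPO = timedelta(hours=24)                 # assumed objective, illustrative only


def latest_backup_age() -> timedelta:
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=BACKUP_BUCKET, Prefix=BACKUP_PREFIX)
    objects = resp.get("Contents", [])
    if not objects:
        return timedelta.max  # no backups found at all
    newest = max(obj["LastModified"] for obj in objects)
    return datetime.now(timezone.utc) - newest


if __name__ == "__main__":
    age = latest_backup_age()
    status = "within" if age <= RPO else "OUTSIDE"
    print(f"Latest backup is {age} old - {status} the {RPO} RPO")
```

A scheduled check like this complements, but does not replace, actual restore testing, which is what ultimately proves RTO/RPO compliance.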
Posted 3 days ago