12.0 years
1 - 8 Lacs
Cochin
On-site
Job Information
We are looking for a highly skilled and experienced .NET Architect to lead the design, development, and deployment of enterprise-grade applications using Microsoft technologies. The ideal candidate will have deep expertise in .NET architecture, cloud computing, microservices, and secure API development. You will collaborate with cross-functional teams to drive innovation, scalability, and performance.

Your Responsibilities
- Design end-to-end architecture for scalable and maintainable enterprise applications using .NET (Core/Framework).
- Provide technical leadership and guidance to development teams, ensuring adherence to best practices.
- Define architectural standards, design patterns, and governance processes.
- Lead solution design using Microservices, Clean Architecture, and Domain-Driven Design (DDD).
- Review code and architecture, ensuring quality, performance, and security compliance.
- Architect and deploy applications on Azure (App Services, Functions, API Gateway, Key Vault, etc.).
- Collaborate with product owners, business analysts, and stakeholders to convert business needs into technical solutions.
- Implement DevOps pipelines for continuous integration and deployment (CI/CD) using Azure DevOps or GitHub Actions.
- Oversee security architecture, including authentication (OAuth 2.0, OpenID Connect) and data protection.
- Develop proof-of-concepts (PoCs) and technical prototypes to validate solution approaches.

Required Skills
- 12+ years of experience in software development using Microsoft technologies.
- 3+ years in an architectural or senior design role.
- Proficiency in C#, ASP.NET Core, Web API, Entity Framework, LINQ.
- Strong experience in microservices architecture and distributed systems.
- Expertise in Azure services (App Services, Azure Functions, Blob Storage, Key Vault, etc.).
- Hands-on experience with CI/CD, DevOps, Docker, Kubernetes.
- Deep understanding of SOLID principles, design patterns, and architectural best practices.
- Experience in secure coding practices and API security (JWT, OAuth2, IdentityServer).
- Strong background in relational and NoSQL databases (SQL Server, Cosmos DB, MongoDB).
- Excellent communication, leadership, and documentation skills.

Preferred Qualifications
- Microsoft Certified: Azure Solutions Architect Expert or equivalent certification.
- Experience with frontend frameworks (React, Angular, Blazor) is a plus.
- Knowledge of event-driven architecture and message queues (e.g., Kafka, RabbitMQ).
- Exposure to Infrastructure as Code (Terraform, ARM, Bicep).
- Experience working in Agile/Scrum environments.

Experience: 12+ Years
Work Location: Kochi
Work Type: Full Time
Please send your resume to careers@cabotsolutions.com
Posted 6 days ago
5.0 years
0 Lacs
Cochin
On-site
Job Position: Magento Developer
Location: Kochi/Bangalore
Experience: 5-15 years
Mandate: Magento 1 and Magento 2 experience, including migration; familiarity with RESTful APIs and GraphQL; experience in headless architecture; strong knowledge of Magento indexing and caching; experience in customization using third-party search modules.

Job Description:
- Overall 3+ years of experience working on Magento / Adobe Commerce Cloud; preference for candidates with over 5 years of experience in various capacities in the retail domain.
- Deep knowledge of Magento 2+, preferably with a full-stack mindset.
- Good understanding of all sub-systems in eCommerce, including user management, catalog/product/browse/search, promotions and pricing, payments, cart and checkout, tax, address validation, place order, and backend jobs and processes.
- Preference for candidates who have worked in a composable paradigm, with knowledge of disparate components for CMS (AEM, Contentful, etc.), search (Constructor, Bloomreach, etc.), loyalty, PWA for the experience layer, international shipping, etc.
- Able to build custom reusable modules from scratch.
- Deep understanding of Magento 2 architecture and best practices.
- Familiarity with RESTful APIs and GraphQL; capable of extending GraphQL schemas for custom modules.
- Strong knowledge of Magento indexing and caching.
- Proven experience in writing and managing backend batch jobs, data syncs, and cron-based processes; create and optimize custom scheduled jobs and asynchronous background processes (e.g., order sync, catalog imports).
- Solid MySQL and database schema design experience, including indexing and optimization; optimize database queries, indexing strategies, and backend performance across Magento and related services.
- Proficient in developing and consuming REST/SOAP APIs.
- Experience with message queues (RabbitMQ, Kafka, or similar) is recommended.
- Third-party service integration: preference for candidates with experience integrating ERPs, CRMs, OMS, payment gateways, etc.
- Experience working with multi-website/multi-store/store-view/brand setups with multi-language and multi-currency support.
- Proficient in PHP and MySQL.
- Exposure to headless architecture or PWA Studio is an advantage.
- Good grasp of Agile/Scrum methodologies and tools like Jira.
- Collaborate with cross-functional teams, including UI/UX designers, product managers, and QA, to ensure quality and timely delivery.
- Optimize site performance and scalability; perform code reviews and ensure coding standards.
- Troubleshoot and resolve complex technical issues in a timely manner.
- Adobe certification (Professional/Expert) is recommended.
- Experience in test-driven development (TDD), integration testing, and end-to-end testing using JUnit, Mockito, RestAssured, etc.
- Experience with continuous integration/delivery models such as Azure DevOps, including Git, CI/CD pipelines, and IaC.

Good to Have Skills:
- Demonstrable understanding of infrastructure and application security management in the context of developing and operating large-scale multi-tenant systems.
- Broad knowledge of contemporary technologies and frameworks, blended with experience working with relevant ones (RESTful web services, databases).

Job Type: Full-time
Pay: ₹269,271.01 - ₹2,590,380.65 per year
Work Location: In person
Posted 6 days ago
4.0 years
7 - 9 Lacs
Gurgaon
On-site
As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We're a technology company that leads with our humanity, driving our business priorities alongside meaningful social, community, and societal impact.

How You Will Contribute:
As a Senior Software Developer within the Blue Planet team, you will play a key role in designing, developing, testing, and supporting scalable software solutions tailored for carrier-class networks and cloud environments. This role requires a strong technical foundation, attention to detail, and a collaborative mindset to deliver high-quality, modular code that is built to scale and last.

You will:
- Work closely with cross-functional teams to design and develop high-performing software modules and features.
- Write and maintain backend and frontend code with a strong emphasis on quality, performance, and maintainability.
- Support system design, documentation, and end-to-end development, including unit testing and debugging.
- Participate in global agile development teams to deliver against project priorities and milestones.
- Contribute to the development of telecom inventory management solutions integrated with cloud platforms and advanced network technologies.

The Must Haves:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 4+ years of software development experience.
- Backend: Java 11+, Spring (Security, Data, MVC), Spring Boot, J2EE, Maven, JUnit.
- Frontend: TypeScript, JavaScript, Angular 2+, HTML, CSS, SVG, Protractor, Jasmine.
- Databases: Neo4j (graph DB), PostgreSQL, TimescaleDB.
- Experience with SSO implementations (LDAP, SAML, OAuth2).
- Proficiency with Docker, Kubernetes, and cloud platforms (preferably AWS).
- Strong understanding of algorithms, data structures, and software design patterns.

Assets:
- Experience with ElasticSearch, Camunda/BPMN, Drools, Kafka integration.
- Knowledge of RESTful APIs using Spring MVC.
- Knowledge of inventory management systems (e.g., Cramer, Granite, Metasolv).
- Familiarity with tools like Node.js, Gulp, and build/test automation.
- Exposure to telecom/networking technologies such as DWDM/OTN, SONET, MPLS, GPON, FTTH.
- Understanding of OSS domains and exposure to telecom network/service topology and device modeling.
- Prior experience working in a global, agile development environment.

Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.

At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
Posted 6 days ago
2.0 years
4 - 10 Lacs
Gurgaon
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Software Development Engineer II

Introduction to team
Are you a technologist who is passionate about building robust, scalable, and performant applications and data products? This is exactly what we do. Join the Data Engineering & Tooling Team! The Data Engineering & Tooling Team (part of Enterprise Data Products at Expedia) is responsible for making traveler, partner, and supply data accessible, unlocking insights and value. Our mission is to build and manage the travel industry's premier data products and SDKs.

Our team is looking for a Software Engineer who applies engineering principles to build and improve existing systems. We follow Agile principles, and we're proud to offer a dynamic, diverse, and collaborative environment where you can play an impactful role and build your career. Would you like to be part of a global tech company that does travel? Don't wait, apply now!

In this role, you will:
- Implement products and solutions that are highly scalable, with high-quality, clean, maintainable, optimized, modular, and well-documented code across the technology stack.
- Craft APIs, and develop and test applications and services to ensure they meet design requirements.
- Work collaboratively with all members of the technical staff and other partners to build and ship outstanding software in a fast-paced environment.
- Apply knowledge of software design principles and Agile methodologies and tools.
- Resolve problems and roadblocks as they occur, with help from peers or managers; follow through on details and drive issues to closure.
- Assist with supporting production systems (investigate issues and work toward resolution).

Experience and qualifications:
- Bachelor's or Master's degree in Computer Science & Engineering or a related technical field, or equivalent related professional experience.
- 2+ years of software development or data engineering experience in an enterprise-level engineering environment.
- Proficient with object-oriented programming concepts, with a strong understanding of data structures, algorithms, data engineering (at scale), and computer science fundamentals.
- Experience with Java, Scala, the Spring framework, microservice architecture, and orchestration of containerized applications, along with a good grasp of OO design and strong design-patterns knowledge.
- Solid understanding of different API types (e.g., REST, GraphQL, gRPC), access patterns, and integration.
- Prior knowledge and experience of NoSQL databases (e.g., ElasticSearch, ScyllaDB, MongoDB).
- Prior knowledge and experience of big data platforms, batch processing (e.g., Spark, Hive), stream processing (e.g., Kafka, Flink), and cloud-computing platforms such as Amazon Web Services.
- Knowledge and understanding of monitoring tools, testing (performance, functional), and application debugging and tuning.
- Good written and verbal communication skills, with the ability to present information clearly and concisely.
Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
Posted 6 days ago
0 years
0 Lacs
India
Remote
Company Description
At Trigonal AI, we specialize in building and managing end-to-end data ecosystems that empower businesses to make data-driven decisions with confidence. From data ingestion to advanced analytics, we offer the expertise and technology to transform data into actionable insights. Our core services include data pipeline orchestration, real-time analytics, and business intelligence & visualization. We use modern technologies such as Apache Airflow, Kubernetes, Apache Druid, Kafka, and leading BI tools to create reliable and scalable solutions. Let us help you unlock the full potential of your data.

Role Description
This is a full-time remote role for a Business Development Specialist. The specialist will focus on day-to-day tasks including lead generation, market research, customer service, and communication with potential clients. The role also includes analytical tasks and collaborating with the sales and marketing teams to develop and implement growth strategies.

Qualifications
- Strong analytical skills for data-driven decision-making
- Effective communication skills for engaging with clients and team members
- Experience in lead generation and market research
- Proficiency in customer service to maintain client relationships
- Proactive and independent work style
- Experience in the tech or data industry is a plus
- Bachelor's degree in Business, Marketing, or a related field
Posted 6 days ago
7.0 years
21 Lacs
Gurgaon
On-site
Job Title: Data Engineer
Location: Gurgaon (Onsite)
Experience: 7+ Years
Employment Type: Contract (6 months)

Job Description:
We are seeking a highly experienced Data Engineer with a strong background in building scalable data solutions using Azure/AWS Databricks, Scala/Python, and big data technologies. The ideal candidate should have a solid understanding of data pipeline design, optimization, and cloud-based deployments.

Key Responsibilities:
- Design and build data pipelines and architectures on Azure or AWS
- Optimize Spark queries and Databricks workloads
- Manage structured/unstructured data using best practices
- Implement scalable ETL processes with tools like Airflow, Kafka, and Flume
- Collaborate with cross-functional teams to understand and deliver data solutions

Required Skills:
- Azure/AWS Databricks
- Python / Scala / PySpark
- SQL, RDBMS
- Hive / HBase / Impala / Parquet
- Kafka, Flume, Sqoop, Airflow
- Strong troubleshooting and performance tuning in Spark

Qualifications:
- Bachelor's degree in IT, Computer Science, Software Engineering, or a related field
- Minimum 7+ years of experience in Data Engineering/Analytics

Apply now if you're looking to join a dynamic team working with cutting-edge data technologies!

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: From ₹180,000.00 per month
Work Location: In person
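The "scalable ETL processes" this posting asks about usually come down to streaming records through extract, transform, and load stages rather than materializing whole datasets in memory. A minimal, library-free sketch of that pattern (the column names, error handling, and batch size are invented for illustration; a real pipeline would use the Spark or Airflow tooling listed above):

```python
import csv
import io
from itertools import islice

def extract(fileobj):
    """Yield rows lazily so the full file never sits in memory."""
    yield from csv.DictReader(fileobj)

def transform(rows):
    """Normalize one record at a time; bad rows are skipped, not fatal."""
    for row in rows:
        try:
            yield {"user_id": int(row["user_id"]),
                   "amount": round(float(row["amount"]), 2)}
        except (KeyError, ValueError):
            continue  # in a real pipeline: route to a dead-letter sink

def load(records, batch_size=1000):
    """Collect fixed-size batches, the same idea as Spark partitions."""
    batches = []
    it = iter(records)
    while batch := list(islice(it, batch_size)):
        batches.append(batch)  # stand-in for a DB/warehouse bulk insert
    return batches

# Hypothetical input: one malformed row that the pipeline tolerates.
raw = io.StringIO("user_id,amount\n1,10.5\n2,oops\n3,7.25\n")
batches = load(transform(extract(raw)), batch_size=2)
```

Because every stage is a generator, the memory footprint is bounded by one batch regardless of input size, which is the property that makes the pattern scale.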
Posted 6 days ago
2.0 years
8 - 9 Lacs
Gurgaon
On-site
Overview:
The role will play a pivotal part in software development activities and collaboration across the Strategy & Transformation (S&T) organization. Software engineering is the cornerstone of scalable digital transformation across PepsiCo's value chain. You will work across the full stack, building highly scalable distributed solutions that enable positive user experiences. The role requires delivering the best possible software solutions, staying customer-obsessed, and ensuring they generate incremental value. The engineer is expected to work closely with the user experience, product, IT, and process engineering teams to develop new products and to prioritize and deliver solutions across S&T core priorities. The ideal candidate should have foundational knowledge of both front-end and back-end technologies, a passion for learning, and the ability to work in a collaborative environment.

Responsibilities:
- Assist in designing, developing, and maintaining scalable web applications.
- Collaborate with senior developers and designers to implement features from concept to deployment.
- Work on both front-end (React, Angular, Vue.js, etc.) and back-end (Node.js, Python, Java, etc.) development tasks.
- Develop and consume RESTful APIs and integrate third-party services.
- Participate in code reviews, testing, and bug fixing.
- Write clean, maintainable, and well-documented code.
- Stay updated on emerging technologies and industry best practices.

Qualifications:
Minimum Qualifications:
- A Bachelor's degree in Computer Science or a related field.
- 2+ years of relevant software development experience.
- Commanding knowledge of data structures, algorithms, and object-oriented design.
- Strong system design fundamentals and experience building distributed, scalable systems.
- Expertise in Java and its related technologies.
- RESTful or GraphQL API (preferred) experience.
- Expertise in Java and the Spring/Spring Boot ecosystem, JUnit, backend microservices, and serverless computing.
- Experience with JavaScript/TypeScript, Node.js, and React, React Native, or related frameworks.
- Experience with large-scale messaging systems such as Kafka is a bonus.
- Experience with NoSQL databases is good to have.
- Hands-on experience with any cloud platform such as AWS, GCP, or Azure (preferred).

Qualities:
- Strong attention to detail and extremely well-organized.
- Ability to work cross-functionally with product, service design, and operations across the organization.
- Demonstrated passion for excellence with respect to engineering services, education, and support.
- Strong interpersonal skills and the ability to navigate a complex, matrixed internal environment.
- Ability to work collaboratively with regional and global partners in other functional units.
Posted 6 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 323129BR
Job Type: Full Time

Your role
The individual in this role will be accountable for successful and timely delivery of projects in an agile environment where digital products are designed and built using cutting-edge technology for WMA clients and Advisors. It is a DevOps role that entails working with teams located in Budapest (Hungary), Wroclaw (Poland), Pune (India), and New Jersey (US). This role will include, but not be limited to, the following:
- maintain and build CI/CD pipelines
- migrate applications to a cloud environment
- build scripts and dashboards for monitoring application health
- build tools to reduce the occurrence of errors and improve customer experience
- deploy changes in prod and non-prod environments
- follow release management processes for application releases
- maintain stability of non-prod environments
- work with development, QA, and support groups in troubleshooting environment issues

Your team
You'll be working as an engineering leader in the Client Data and Onboarding Team in India. We are responsible for WMA (Wealth Management Americas) client-facing technology applications. This leadership role entails working with teams in the US and India. You will play an important role in ensuring a scalable development methodology is followed across multiple teams, and you will participate in strategy discussions with business and in technology strategy discussions with architects. Our culture centers around innovation, partnership, transparency, and passion for the future. Diversity helps us grow, together. That's why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients.
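The "scripts and dashboards for monitoring health of application" mentioned in the role typically reduce probe results to a small set of dashboard states. A hedged sketch of that idea (the `Probe` shape, latency budget, and state names are invented for illustration; real thresholds would come from the team's SLOs and tools like AppDynamics or Splunk):

```python
from dataclasses import dataclass

# Hypothetical SLO threshold; the number is invented for illustration.
LATENCY_BUDGET_MS = 500

@dataclass
class Probe:
    service: str
    status_code: int
    latency_ms: float

def classify(probe):
    """Map a single health probe to a dashboard state."""
    if probe.status_code >= 500:
        return "down"
    if probe.status_code >= 400 or probe.latency_ms > LATENCY_BUDGET_MS:
        return "degraded"
    return "healthy"

def summarize(probes):
    """Roll probes up per service; the worst observed state wins."""
    order = {"healthy": 0, "degraded": 1, "down": 2}
    summary = {}
    for p in probes:
        state = classify(p)
        prev = summary.get(p.service, "healthy")
        summary[p.service] = state if order[state] > order[prev] else prev
    return summary
```

The worst-state-wins aggregation matters on a dashboard: one slow or failing probe should never be averaged away by many healthy ones.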
Your expertise
You should have 8+ years of experience and be able to:
- develop, build, and maintain GitLab CI/CD pipelines
- use containerization technologies, orchestration tools (Kubernetes), build tools (Maven, Gradle), VCS (GitLab), and Sonar and Fortify tools to build robust deploy and release infrastructure
- deploy changes in prod and non-prod Azure cloud infrastructure using Helm, Terraform, and Ansible, and set up appropriate observability measures
- build scripts (Bash, Python, Puppet) and dashboards for monitoring application health (AppDynamics, Splunk, AppInsights)
- apply basic networking knowledge (load balancing, SSH, certificates) and middleware knowledge (MQ, Kafka, Azure Service Bus, Event Hub)
- follow release management processes for application releases
- maintain stability of non-prod environments
- work with development, QA, and support groups in troubleshooting environment issues

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves.
We're committed to disability inclusion, and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 322638BR
Job Type: Full Time

Your role
Do you have a proven track record of building scalable applications to support firmwide data distribution infrastructure? Are you confident at iteratively refining user requirements and removing any ambiguity? Do you want to design and build best-in-class, enterprise-scale applications using the latest technologies?
- Develop web services and share development expertise about best practices
- Implement scalable solutions by applying the right design principles and UBS practices
- Work with leaders and deliver on requirements
- Collaborate to refine user requirements
- Analyze root causes of incidents, document them, and provide answers
- Apply a methodical approach to software solutions through open discussions
- Perform regular code reviews and share results with colleagues

Your team
You'll be working within the Group Chief Technology Organization. We provide engineering services to all business divisions of the UBS group. The team partners with different divisions and functions across the bank to develop innovative digital solutions and expand our technical expertise into new areas. As an experienced full stack developer, you'll play an important role in building group-wide web services that help build a robust, world-class data distribution platform.

Your expertise
- Proven track record of full stack development in Java, Spring Boot, JPA, and React
- Excellent understanding and hands-on experience of Core Java, Spring, Spring Boot, and microservices
- Good understanding of cloud-native microservice architecture, database concepts, cloud fundamentals, and GitLab
- Hands-on experience in web development (React, Angular, Ext JS)
- Experience with a relational database (PostgreSQL)
- Experience with a unit testing framework (e.g., JUnit)
- Proven delivery experience in the Kafka ecosystem, with cluster/broker implementation and topic producer/consumer performance improvement experience
- Cloud implementation and deployment experience is a plus

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves.

We're committed to disability inclusion, and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 6 days ago
3.0 years
0 Lacs
Guwahati, Assam, India
On-site
We are seeking a highly skilled Software Engineer with strong Python expertise and a solid understanding of data engineering principles to join our team. The ideal candidate will work on developing and optimizing scalable applications and data workflows, integrating diverse data sources, and supporting the development of data-driven products. This role requires hands-on experience in software development, data modeling, ETL/ELT pipelines, APIs, and cloud-based data systems. You will collaborate closely with product, data, and engineering teams to build high-quality, maintainable, and efficient solutions that support analytics, machine learning, and business intelligence initiatives.

Roles and Responsibilities

Software Development
- Design, develop, and maintain Python-based applications, APIs, and microservices with a strong focus on performance, scalability, and reliability.
- Write clean, modular, and testable code following best software engineering practices.
- Participate in code reviews, debugging, and optimization of existing applications.
- Integrate third-party APIs and services as required for application features or data ingestion.

Data Engineering
- Build and optimize data pipelines (ETL/ELT) for ingesting, transforming, and storing structured and unstructured data.
- Work with relational and non-relational databases, ensuring efficient query performance and data integrity.
- Collaborate with the analytics and ML teams to ensure data availability, quality, and accessibility for downstream use cases.
- Implement data modeling, schema design, and version control for data pipelines.

Cloud & Infrastructure
- Deploy and manage solutions on cloud platforms (AWS/Azure/GCP) using services such as S3, Lambda, Glue, BigQuery, or Snowflake.
- Implement CI/CD pipelines and participate in DevOps practices for automated testing and deployment.
- Monitor and optimize application and data pipeline performance using observability tools.

Collaboration & Strategy
- Work cross-functionally with software engineers, data scientists, analysts, and product managers to understand requirements and translate them into technical solutions.
- Provide technical guidance and mentorship to junior developers and data engineers as needed.
- Document architecture, code, and processes to ensure maintainability and knowledge sharing.

Required Skills
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in Python software development.
- Strong knowledge of data structures, algorithms, and object-oriented programming.
- Hands-on experience in building data pipelines (Airflow, Luigi, Prefect, or custom ETL frameworks).
- Proficiency with SQL and database systems (PostgreSQL, MySQL, MongoDB, etc.).
- Experience with cloud services (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Familiarity with message queues/streaming platforms (Kafka, Kinesis, RabbitMQ) is a plus.
- Strong understanding of APIs, RESTful services, and microservice architectures.
- Knowledge of CI/CD pipelines, Git, and testing frameworks (PyTest, UnitTest).

Apply through this link: https://forms.gle/WedXcaM6obARcLQS6
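Data ingestion against third-party APIs, as described in this role, has to tolerate transient failures. A common pattern is retrying with exponential backoff; here is a minimal stdlib-only sketch (the `flaky_fetch` stand-in and its payload are invented for illustration):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the real error
            time.sleep(base_delay * 2 ** attempt)

# A stand-in for a flaky upstream API: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream error")
    return {"rows": 42}

result = with_retries(flaky_fetch)
```

In production, the bare `except Exception` would usually be narrowed to the retryable error types, and jitter would be added to the delay to avoid thundering-herd retries.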
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview Leading AI-driven Global Supply Chain Solutions Software Product Company and one of Glassdoor’s “Best Places to Work” Seeking an astute individual that has a strong technical foundation with the additional ability to be hands-on with the broader engineering team as part of the development/deployment cycle, and deep knowledge of industry best practices, Data Science and Machine Learning experience with the ability to implement them working with both the platform, and the product teams. Scope Our machine learning platform ingests data in real time, processes information from millions of retail items to serve deep learning models and produces billions of predictions on a daily basis. Blue Yonder Data Science and Machine Learning team works closely with sales, product and engineering teams to design and implement the next generation of retail solutions. Data Science team members are tasked with turning both small, sparse and massive data into actionable insights with measurable improvements to the customer bottom line. Our Current Technical Environment Software: Python 3.* Frameworks/Others: TensorFlow, PyTorch, BigQuery/Snowflake, Apache Beam, Kubeflow, Apache Flink/Dataflow, Kubernetes, Kafka, Pub/Sub, TFX, Apache Spark, and Flask. Application Architecture: Scalable, Resilient, Reactive, event driven, secure multi-tenant Microservices architecture. Cloud: Azure What We Are Looking For Bachelor’s Degree in Computer Science or related fields; graduate degree preferred. Solid understanding of data science and deep learning foundations. Proficient in Python programming with a solid understanding of data structures. Experience working with most of the following frameworks and libraries: Pandas, NumPy, Keras, TensorFlow, Jupyter, Matplotlib etc. Expertise in any database query language, SQL preferred. Familiarity with Big Data tech such as Snowflake , Apache Beam/Spark/Flink, and Databricks. etc. 
Solid experience with any of the major cloud platforms, preferably Azure and/or GCP (Google Cloud Platform). Reasonable knowledge of modern software development tools and respective best practices, such as Git, Jenkins, Docker, Jira, etc. Familiarity with deep learning, NLP, reinforcement learning, combinatorial optimization, etc. Provable experience guiding junior data scientists in official or unofficial settings. Desired: knowledge of Kafka, Redis, Cassandra, etc. What You Will Do As a Senior Data Scientist, you will serve as a specialist who supports the team with the following responsibilities. Independently, or alongside junior scientists, implement machine learning models by: Procuring data from platform, client, and public data sources. Implementing data enrichment and cleansing routines. Implementing features, preparing modelling data sets, feature selection, etc. Evaluating candidate models, then selecting and reporting on the test performance of the final one. Ensuring proper runtime deployment of models. Implementing runtime monitoring of model inputs and performance in order to ensure continued model stability. Work with product, sales, and engineering teams to help shape the final solution. Use data to understand patterns; come up with and test hypotheses; iterate. Help prepare sales materials, estimate hardware requirements, etc. Attend client meetings, online and onsite, to discuss new and current functionality. Our Values If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
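The posting above lists "runtime monitoring of model inputs ... to ensure continued model stability" as a responsibility. A minimal, stdlib-only Python sketch of that idea is below; the class name, window size, and threshold are illustrative assumptions, not Blue Yonder's actual tooling:

```python
import math
from collections import deque

class InputMonitor:
    """Track a rolling window of one model input feature and flag drift
    when the recent mean moves too far from the training-time baseline."""

    def __init__(self, baseline_mean, baseline_std, window=100, threshold=3.0):
        self.baseline_mean = baseline_mean
        self.baseline_std = baseline_std
        self.window = deque(maxlen=window)
        self.threshold = threshold  # allowed distance, in standard-error units

    def observe(self, value):
        self.window.append(value)

    def drifted(self):
        if not self.window:
            return False
        recent_mean = sum(self.window) / len(self.window)
        # standard error of the window mean under the baseline distribution
        se = self.baseline_std / math.sqrt(len(self.window))
        return abs(recent_mean - self.baseline_mean) > self.threshold * se

monitor = InputMonitor(baseline_mean=10.0, baseline_std=2.0, window=50)
for v in [10.1, 9.8, 10.3, 9.9]:
    monitor.observe(v)
print(monitor.drifted())  # stable inputs: no drift flagged
for v in [14.0] * 50:
    monitor.observe(v)
print(monitor.drifted())  # shifted inputs: drift flagged
```

In production this check would run per feature in the serving path, feeding an alerting system rather than a print statement.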
Posted 6 days ago
0 years
3 - 6 Lacs
Chennai
On-site
LTTS India Chennai Job Description You should definitely have: Bachelor's degree in computer science, computer engineering, or related technologies. Seven years of experience in systems engineering within the networking industry. Expertise in Linux deployment, scripting, and configuration. Expertise in TCP/IP communications stacks and optimizations. Experience with ELK (Elasticsearch, Logstash, Kibana), Grafana, data streaming (e.g., Kafka), and software visualization. Experience in analyzing and debugging code defects in the production environment. Proficiency in version control systems such as Git. Ability to design comprehensive test scenarios for systems usability, execute tests, and prepare detailed reports on effectiveness and defects for production teams. Full-cycle systems engineering experience covering requirements capture, architecture, design, development, and system testing. Demonstrated ability to work independently and collaboratively within cross-functional teams. Proficient in installing, configuring, debugging, and interpreting performance analytics to monitor, aggregate, and visualize key performance indicators over time. Proven track record of directly interfacing with customers to address concerns and resolve issues effectively. Strong problem-solving skills, capable of driving resolutions autonomously without senior engineer support. Experience in configuring MySQL and PostgreSQL, including setup of replication, troubleshooting, and performance improvement. Proficiency in networking concepts such as network architecture, protocols (TCP/IP, UDP), routing, and VLANs, essential for deploying new system servers effectively. Proficiency in Shell/Bash scripting on Linux systems. Proficient in utilizing, modifying, troubleshooting, and updating Python scripts and tools to refine code. Excellent written and verbal communication skills. Ability to document processes, procedures, and system configurations effectively.
Ability to handle stress and maintain quality. This includes resilience to effectively manage stress and pressure, as well as a demonstrated ability to make informed decisions, particularly in high-pressure situations. Excellent written and verbal communication skills, including the ability to document processes, procedures, and system configurations effectively. This role requires being on-call 24/7 to address service-affecting issues in production. It also requires working during Chicago business hours, aligning with local time for effective coordination and responsiveness to business operations and stakeholders in the region. It would be nice if you had: Solid software development experience in the Python programming language, with the ability to understand, execute, and debug issues, as well as develop new tools using Python. Experience in design, architecture, traffic flows, configuration, debugging, and deployment of Deep Packet Inspection (DPI) systems. Proficiency in managing and configuring AAA systems (Authentication, Authorization, and Accounting). Job Requirement ELK, Grafana
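The role above calls for "interpreting performance analytics to monitor, aggregate, and visualize key performance indicators over time." A stdlib-only Python sketch of the aggregation step is below: raw samples are bucketed into per-minute min/avg/max summaries, the shape a Grafana panel would typically chart. The function name and sample data are illustrative assumptions:

```python
from collections import defaultdict

def aggregate_kpis(samples, bucket_seconds=60):
    """Aggregate raw (timestamp, metric, value) samples into fixed-width
    time buckets, each summarized as min/avg/max."""
    buckets = defaultdict(list)
    for ts, metric, value in samples:
        # align each timestamp down to the start of its bucket
        buckets[(metric, ts - ts % bucket_seconds)].append(value)
    return {
        key: {"min": min(vals), "avg": sum(vals) / len(vals), "max": max(vals)}
        for key, vals in buckets.items()
    }

samples = [
    (0, "cpu", 40.0), (30, "cpu", 60.0),  # first minute
    (65, "cpu", 90.0),                    # second minute
]
report = aggregate_kpis(samples)
print(report[("cpu", 0)]["avg"])   # 50.0
print(report[("cpu", 60)]["max"])  # 90.0
```

In an ELK/Grafana stack, Logstash or a stream processor would perform this bucketing before the data is indexed and visualized.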
Posted 6 days ago
25.0 years
0 - 0 Lacs
Chennai
On-site
The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. 
We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Summary: What you need to know about the role- As a member of the Observability Platform team, you will be responsible for the development and delivery of the applications and services that power PayPal’s Enterprise Observability platform. You will work closely with product, design, and development teams to understand what their observability needs are. We're looking for talented, motivated, and detail-oriented technologists with a passion for building robust systems at scale. We value collaboration, communication, and a passion for achieving engineering and product excellence. Meet our team: The Observability Team at PayPal is responsible for providing a world-class platform that can collect, ingest, store, alert on, and visualize data from many different sources in PayPal – like application logs, infrastructure, virtual machines, containers, network, load balancers, etc. The platform should provide functionalities that enable different teams in PayPal to gain business insights and debug/triage issues in an easy-to-use, intuitive, self-service manner. The platform should be scalable to support the data needs of PayPal (a Fortune 500 company); be highly available at 99.9% or higher; and be reliable and fault-tolerant across the different physical data centers and thousands of microservices. You’ll work alongside the best and the brightest engineering talent in the industry. You need to be dynamic, collaborative, and curious as we build new experiences and improve the Observability platform running at a scale few companies can match.
Job Description: Your way to impact: As an engineer in our development team, you will be responsible for developing the next generation of PayPal's Observability platform, supporting the long-term reliability and scalability of the system, and implementing solutions that avoid or minimize the day-to-day support work required to keep the systems up and running. If you are passionate about application development, systems design, scaling beyond 99.999% reliability, and working in a highly dynamic environment with a team of smart and talented engineers, then this is the job for you. You will work closely with product, experience, and/or development teams to understand developer needs around observability and deliver the functions that meet those needs. The possibilities are unlimited for disruptive thinking, and you will have an opportunity to be a part of making history in the niche Observability area. Your Day to Day: As a Software Engineer - Backend, you'll contribute to building robust backend systems. You'll collaborate closely with experienced engineers to learn and grow your skills. Develop and maintain backend components. Write clean, efficient code adhering to coding standards. Participate in code reviews and provide feedback. What do you need to Bring 2+ years of backend development experience and a bachelor’s degree in computer science or a related field. Strong foundation in programming concepts and data structures. Proficiency in at least one backend language (Java, Python, Ruby on Rails) Proficiency in back-end development utilizing Java EE technologies (Java, application servers, servlet containers, JMS, JPA, Spring MVC, Hibernate) Strong understanding of web services and Service-Oriented Architecture (SOA) standards, including REST, OAuth, and JSON, with experience in Java environments. Experience with ORM (Object-Relational Mapper) tools, working within Java-based solutions like Hibernate. 
Experience with databases (SQL, NoSQL) Preferred Qualification: Experience with "Observability Pillars - Logs / Metrics / Traces" , Data Streaming Pipelines, Google Dataflow and Kafka Subsidiary: PayPal Travel Percent: 0 PayPal does not charge candidates any fees for courses, applications, resume reviews, interviews, background checks, or onboarding. Any such request is a red flag and likely part of a scam. To learn more about how to identify and avoid recruitment fraud please visit https://careers.pypl.com/contact-us . For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits: At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset—you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com . Who We Are: Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. 
If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com . Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. Any general requests for consideration of your skills, please Join our Talent Community . We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don’t hesitate to apply.
Posted 6 days ago
0 years
4 - 7 Lacs
Noida
On-site
Company Description Daxko powers wellness to improve lives. Every day our team members focus their passion and expertise on helping health & wellness facilities operate efficiently and engage their members. Whether a neighborhood yoga studio, a national franchise with locations in every city, a YMCA or JCC-and every type of organization in between-we build solutions that make every aspect of running and being a member of a health and wellness organization easier and delightful. Job Description The Senior Engineer I is responsible for developing high-quality applications and writing code on a daily basis. This includes heavy collaboration with product managers, architects, and other software engineers to build best-in-class software using modern technologies and an agile development process. The Senior Software Engineer focuses on the continued growth of their team and team members. The Senior Software Engineer reports to the Manager, Engineering/Development. You will also: Be responsible for defining design patterns and identifying frameworks used in the engineering team’s solutions development work Be responsible for establishing and guiding the engineering team’s development course Develop high-quality applications that provide a delightful user experience and meet business expectations Develop clean, reusable, well-structured and maintainable code following best practices and industry standards Develop elegant, responsive, high-performance, cross-platform solutions Develop, debug, and modify components of software applications and tools Write automated unit, integration and acceptance tests as appropriate to support our continuous integration pipelines Support and troubleshoot data and/or system issues as needed Be responsible for providing actionable feedback in code reviews Be capable of leading system architecture and design reviews Participate in user story creation in collaboration with the team Guide team members to develop prototypes as necessary and validate 
ideas with a data-driven approach Mentor team members in all aspects of the software development process No Travel Required No Budget Responsibilities Qualifications Bachelor’s degree in a related field such as Computer Science, Computer Engineering, Applied Mathematics, or Applied Sciences, OR equivalent experience Five (5+) years of Software Engineering or other relevant experience Proficient in application development in modern object-oriented programming languages Five (5+) years of experience developing mobile applications in React Native Proficient in building and integrating with web services and RESTful APIs Proficient in SQL or other relational data storage technologies Experience in automated testing practices including unit testing, integration testing, and/or performance testing Experience using code versioning tools such as Git Experience with Agile development methodology Understanding of modern cloud architecture and tools Preferred Education and Experience: Bachelor’s degree or higher (or equivalent) in a related field such as Computer Science, Computer Engineering, Applied Mathematics, or Applied Sciences Seven (7+) years of Software Engineering or other relevant experience Experience developing web applications with React Experience with NodeJS and TypeScript Experience with dependency injection frameworks Experience working with Microservices Architecture Experience using virtualized hosting and delivery (Docker, Kubernetes) Experience working with Realtime Data Streaming (e.g. Kafka, Kinesis) Experience with NoSQL/Non-relational Databases Experience with defining strategies used in an engineering team’s solutions development work Understanding of Serverless Computing (e.g. AWS cloud services) Understanding of AWS Messaging Services (e.g. SNS & SQS) Understanding of DevOps and CI/CD tools (e.g. 
GitLab CI / Jenkins / Bamboo) Understanding of frontend engineering workflow and build tools such as npm, webpack, etc. Additional Information #LI-Hybrid Daxko is dedicated to pursuing and hiring a diverse workforce. We are committed to diversity in the broadest sense, including thought and perspective, age, ability, nationality, ethnicity, orientation, and gender. The skills, perspectives, ideas, and experiences of all of our team members contribute to the vitality and success of our purpose and values. We truly care for our team members, and this is reflected through our offices, benefits, and great perks. These perks are only for our full-time team members. Some of our favorites include: Hybrid work model Leave entitlements Recently introduced hospitalization/caregiving leaves Paid parental leaves (Maternity, Paternity, & Adoption) Group Health Insurance Accidental Insurance Tax-saving reimbursements Provident Fund (PF) Casual work environments Company Events and Celebrations Performance achievement awards Referral bonus Learning & Development opportunities
Posted 6 days ago
3.0 years
5 - 7 Lacs
Noida
On-site
Senior Java Developer (3-4 Years Experience)- Applicant must be from Delhi/NCR only. Advanced Technical Requirements Core Technologies (Must Have) Java : 3-4 years with Java 11+ (Java 21 LTS preferred) Spring Ecosystem : Advanced Spring Boot, Spring Cloud, Spring Security Microservices : Service discovery, API Gateway, distributed systems Database : Advanced PostgreSQL, query optimization, indexing strategies Message Queues : Apache Kafka, event-driven architecture Caching : Redis cluster, distributed caching patterns Spring Cloud Stack Eureka : Service discovery and registration Spring Cloud Gateway : Routing, filtering, load balancing Config Server : Centralized configuration management Circuit Breaker : Resilience4j for fault tolerance Sleuth + Zipkin : Distributed tracing Job Type: Full-time Pay: ₹45,000.00 - ₹60,000.00 per month Work Location: In person Speak with the employer +91 8800602148
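The posting above names Resilience4j's circuit breaker for fault tolerance. The pattern itself is simple enough to sketch in a few lines; here is a stdlib-only Python illustration of the idea (Resilience4j is a Java library, so this is a conceptual sketch, not its actual API, and all names here are invented):

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: after max_failures consecutive
    failures the circuit opens and calls fail fast; after reset_timeout
    seconds it half-opens and lets one trial call through."""

    def __init__(self, max_failures=3, reset_timeout=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock          # injectable for testing
        self.failures = 0
        self.opened_at = None       # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None   # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0           # success closes the circuit again
        return result

fake_now = [0.0]
breaker = CircuitBreaker(max_failures=2, reset_timeout=10.0, clock=lambda: fake_now[0])

def flaky():
    raise ConnectionError("downstream unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
# circuit is now open: the next call fails fast without touching flaky()
try:
    breaker.call(flaky)
except RuntimeError as exc:
    print(exc)
fake_now[0] = 11.0  # past the reset timeout: half-open, trial call allowed
print(breaker.call(lambda: "recovered"))
```

Resilience4j adds sliding-window failure rates, half-open call limits, and metrics on top of this same state machine.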
Posted 6 days ago
3.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Summary: We are seeking a skilled and experienced QA Engineer with a strong technical background in networking, automation, API testing, and performance testing. The ideal candidate will have proficiency in Postman API testing, Java programming, and testing frameworks like JMeter, Selenium, REST Assured, and Robot Framework. Familiarity with network architecture, including ORAN, SMO, RIC, and OSS/BSS, is a plus. Key Responsibilities: Perform functional, performance, and load testing of web applications using tools such as JMeter and Postman. Develop, maintain, and execute automated test scripts using Selenium with Java for web application testing. Design and implement tests for RESTful APIs using REST Assured (Java library) for testing HTTP responses and ensuring proper API functionality. Collaborate with development teams to identify and resolve software defects through effective debugging and testing. Utilize the Robot Framework with Python for acceptance testing and acceptance test-driven development. Conduct end-to-end testing and ensure that systems meet all functional requirements. Ensure quality and compliance of software releases by conducting thorough test cases and evaluating product quality. Required Skill Set: Postman API Testing: Experience in testing RESTful APIs and web services using Postman. Experience range: 3 to 8 years. Java: Strong knowledge of Java for test script development, particularly with Selenium and REST Assured. JMeter: Experience in performance, functional, and load testing using Apache JMeter. Selenium with Java: Expertise in Selenium WebDriver for automated functional testing, including script development and maintenance using Java. REST Assured: Proficient in using the REST Assured framework (Java library) for testing REST APIs and validating HTTP responses. Robot Framework: Hands-on experience with the Robot Framework for acceptance testing and test-driven development (TDD) in Python. 
ORAN/SMO/RIC/OSS Architecture: In-depth knowledge of ORAN (Open Radio Access Network), SMO (Service Management Orchestration), RIC (RAN Intelligent Controller), and OSS (Operations Support Systems) architectures. Good-to-Have Skill Set: Networking Knowledge: Deep understanding of networking concepts, specifically around RAN elements and network architectures (ORAN, SMO, RIC, OSS). Monitoring Tools: Experience with Prometheus, Grafana, and Kafka for real-time monitoring and performance tracking of applications and systems. Keycloak: Familiarity with Keycloak for identity and access management.
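The responsibilities above center on validating HTTP responses from RESTful APIs (REST Assured-style checks: status code, body shape, field values). A stdlib-only Python sketch of such a validation helper is below; the function, field names, and payloads are illustrative assumptions, not part of any real test suite:

```python
import json

def validate_user_response(status_code, body_text):
    """Check an HTTP response the way an API test would: status code,
    JSON well-formedness, required fields, and field-level expectations.
    Returns a list of failure messages (an empty list means pass)."""
    failures = []
    if status_code != 200:
        failures.append(f"expected 200, got {status_code}")
    try:
        body = json.loads(body_text)
    except json.JSONDecodeError:
        return failures + ["body is not valid JSON"]
    for field in ("id", "name", "email"):
        if field not in body:
            failures.append(f"missing field: {field}")
    if "@" not in body.get("email", ""):
        failures.append("email is malformed")
    return failures

ok = validate_user_response(200, '{"id": 7, "name": "Asha", "email": "asha@example.com"}')
bad = validate_user_response(500, '{"id": 7}')
print(ok)        # [] -- all checks passed
print(len(bad))  # several failures collected, not just the first
```

REST Assured expresses the same checks fluently in Java (`given().when().get(...).then().statusCode(200).body("email", containsString("@"))`); collecting all failures rather than stopping at the first mirrors soft-assertion practice.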
Posted 6 days ago
7.0 years
9 - 10 Lacs
Noida
Remote
Job Title: Software Development Engineer (SDE) Location: Noida About Us: At Clearwater Analytics, we are on a mission to become the world's most trusted and comprehensive technology platform for investment management, reporting, accounting, and analytics. We partner with sophisticated institutional investors worldwide and are seeking a Software Development Engineer who shares our passion for innovation and client commitment. Role Overview: We are seeking a skilled Software Development Engineer with strong coding and design skills, as well as hands-on experience in cloud technologies and distributed architecture. This role focuses on delivering high-quality software solutions within the FinTech sector, particularly in the Front Office, OEMS, PMS, and Asset Management domains. Key Responsibilities: Design and develop scalable, high-performance software solutions in a distributed architecture environment. Collaborate with cross-functional teams to ensure engineering strategies align with business objectives and client needs. Implement real-time and asynchronous systems with a focus on event-driven architecture. Ensure operational excellence by adhering to best practices in software development and engineering. Present technical concepts and project updates clearly to stakeholders, fostering effective communication. Requirements: 7+ years of hands-on experience in software development, ideally within the FinTech sector. Strong coding and design skills, with a solid understanding of software development principles. Deep expertise in cloud platforms (AWS/GCP/Azure) and distributed architecture. Experience with real-time systems, event-driven architecture, and engineering excellence in a large-scale environment. Proficiency in Java and familiarity with messaging systems (JMS/Kafka/MQ). Excellent verbal and written communication skills. Desired Qualifications: Experience in the FinTech sector, particularly in Front Office, OEMS, PMS, and Asset Management at scale. 
Bonus: Experience with BigTech, Groovy, Bash, Python, and knowledge of GenAI/AI technologies. What we offer: Business casual atmosphere in a flexible working environment Team-focused culture that promotes innovation and ownership Access to cutting-edge investment reporting technology and expertise Defined and undefined career pathways, allowing you to grow your way Competitive medical, dental, vision, and life insurance benefits Maternity and paternity leave Personal Time Off and Volunteer Time Off to give back to the community RSUs, as well as an employee stock purchase plan and a 401(k) with a match Work from anywhere 3 weeks out of the year Work from home Fridays Why Join Us? This is an incredible opportunity to be part of a dynamic engineering team that is shaping the future of investment management technology. If you're ready to make a significant impact and advance your career, apply now!
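The Clearwater role above emphasizes event-driven architecture and messaging systems (JMS/Kafka/MQ). The core pattern (producers publish named events, decoupled consumers react) can be sketched in-process with stdlib Python; the topic names and payloads below are invented for illustration and toy-scale only, standing in for what a real broker does across services:

```python
from collections import defaultdict

class EventBus:
    """In-process event bus: publishers emit events on named topics and
    any number of decoupled subscribers react, mirroring (at toy scale)
    how services coordinate over a broker like Kafka or JMS."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        # deliver to every subscriber of this topic, in subscription order
        for handler in self.handlers[topic]:
            handler(event)

bus = EventBus()
fills = []
bus.subscribe("order.filled", fills.append)  # e.g. a position-keeping consumer
bus.subscribe("order.filled", lambda e: print("notify desk:", e["order_id"]))
bus.publish("order.filled", {"order_id": "A-17", "qty": 100})
print(fills)
```

The key property the posting alludes to is that the publisher knows nothing about its consumers, so new downstream systems can be added without touching the producing service; a real broker additionally provides durability, partitioning, and asynchronous delivery.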
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're hiring skilled backend developers to build and scale LiveDesign—our enterprise collaboration platform built on an event-driven microservices architecture with real-time stream processing. This platform enables the execution and analysis of quantum simulations, machine learning models, and other computational methods. It's used in diverse industries, from drug researchers seeking to cure disease to materials designers in the fields of organic electronics, polymer science, and other areas. What You’ll Do Day-to-day Design, build, and test high-performance, distributed components in problem areas including, but not limited to, data aggregation/transformation/reporting and large-scale computations for a collaborative multi-user application Architect and implement scalable, maintainable solutions using technologies like Kafka and Kubernetes. Contribute to a culture of clean code and continuous learning through regular code reviews. Collaborate closely within a cross-functional, agile team composed of product designers, developers, and testers to deliver features and functionality that meet business and product goals. Who We’re Looking For The ideal candidate should have: Bachelor's/Master's degree in computer science or an equivalent stream, with three to six years of experience in enterprise application development. Practical understanding of CS concepts in the areas of data structures and algorithms, database management systems, operating systems, and computer networks. Excellent programming skills, logical reasoning abilities, and enthusiasm for solving interesting problems, along with a willingness to learn new technologies. Experience with event-driven microservices architecture and Kubernetes-based deployments. Proficient interpersonal skills (oral/verbal communication), complemented by an ability to collaborate in a team environment. 
As an equal opportunity employer, Schrödinger hires outstanding individuals into every position in the company. People who work with us have a high degree of engagement, a commitment to working effectively in teams, and a passion for the company's mission. We place the highest value on creating a safe environment where our employees can grow and contribute, and refuse to discriminate on the basis of race, color, religious belief, sex, age, disability, national origin, alienage or citizenship status, marital status, partnership status, caregiver status, sexual and reproductive health decisions, gender identity or expression, or sexual orientation. To us, "diversity" isn't just a buzzword, but an important element of our core principles and key business practices. We believe that diverse companies innovate better and think more creatively than homogenous ones because they take into account a wide range of viewpoints. For us, greater diversity doesn't mean better headlines or public images - it means increased adaptability and profitability.
Posted 6 days ago
7.0 years
0 Lacs
India
Remote
Role: Neo4j Engineer
Overall IT Experience: 7+ years
Relevant Experience: Graph Databases 4+ years, Neo4j 2+ years
Location: Remote

Company Description
Bluetick Consultants is a technology-driven firm that supports hiring remote developers, building technology products, and enabling end-to-end digital transformation. With previous experience at top technology companies such as Amazon, Microsoft, and Craftsvilla, we understand the needs of our clients and provide customized solutions. Our team has expertise in emerging technologies, backend and frontend development, cloud development, and mobile technologies. We prioritize staying up to date with the latest technological advances to create a long-term impact and grow together with our clients.

Key Responsibilities
• Graph Database Architecture: Design and implement Neo4j graph database schemas optimized for fund administration data relationships and AI-powered queries
• Knowledge Graph Development: Build comprehensive knowledge graphs connecting entities such as funds, investors, companies, transactions, legal documents, and market data
• Graph-AI Integration: Integrate Neo4j with AI/ML pipelines, particularly for enhanced RAG (Retrieval-Augmented Generation) systems and semantic search capabilities
• Complex Relationship Modeling: Model intricate relationships between Limited Partners, General Partners, fund structures, investment flows, and regulatory requirements
• Query Optimization: Develop high-performance Cypher queries for real-time analytics, relationship discovery, and pattern recognition
• Data Pipeline Integration: Build ETL processes to populate and maintain graph databases from various data sources, including FundPanel.io, legal documents, and external market data, using domain-specific ontologies
• Graph Analytics: Implement graph algorithms for fraud detection, risk assessment, relationship scoring, and investment opportunity identification
• Performance Tuning: Optimize graph database performance for concurrent users and complex analytical queries
• Documentation & Standards: Establish graph modeling standards, query optimization guidelines, and comprehensive technical documentation

Key Use Cases You'll Enable
• Semantic Search Enhancement: Create knowledge graphs that improve AI search accuracy by understanding entity relationships and context
• Investment Network Analysis: Map complex relationships between investors, funds, portfolio companies, and market segments
• Compliance Graph Modeling: Model regulatory relationships and fund terms to support automated auditing and compliance validation
• Customer Relationship Intelligence: Build relationship graphs for customer relations monitoring and expansion opportunity identification
• Predictive Modeling Support: Provide graph-based features for investment prediction and risk assessment models
• Document Relationship Mapping: Connect legal documents, contracts, and agreements through entity and relationship extraction

Required Qualifications
• Bachelor's degree in Computer Science, Data Engineering, or a related field
• 7+ years of overall IT experience
• 4+ years of experience with graph databases, with 2+ years specifically in Neo4j
• Strong background in data modeling, particularly for complex relationship structures
• Experience with financial services data and regulatory requirements preferred
• Proven experience integrating graph databases with AI/ML systems
• Understanding of knowledge graph concepts and semantic technologies
• Experience with high-volume, production-scale graph database implementations

Technology Skills
• Graph Databases: Neo4j (primary), Cypher query language, APOC procedures, Neo4j Graph Data Science library
• Programming: Python, Java, or Scala for graph data processing and integration
• AI Integration: Experience with graph-enhanced RAG systems, vector embeddings in a graph context, GraphRAG implementations
• Data Processing: ETL pipelines, data transformation, real-time data streaming (Kafka, Apache Spark)
• Cloud Platforms: Neo4j Aura, Azure integration, containerized deployments
• APIs: Neo4j drivers, REST APIs, GraphQL integration
• Analytics: Graph algorithms (PageRank, community detection, shortest path, centrality measures)
• Monitoring: Neo4j monitoring tools, performance profiling, query optimization
• Integration: Elasticsearch integration, vector database connections, multi-modal data handling

Specific Technical Requirements
• Knowledge Graph Construction: Entity resolution, relationship extraction, ontology modeling
• Cypher Expertise: Advanced Cypher queries, stored procedures, custom functions
• Scalability: Clustering, sharding, horizontal scaling strategies
• Security: Graph-level security, role-based access control, data encryption
• Version Control: Graph schema versioning, migration strategies
• Backup & Recovery: Graph database backup strategies, disaster recovery planning

Industry Context Understanding
• Fund Administration: Understanding of fund structures, capital calls, distributions, and investor relationships
• Financial Compliance: Knowledge of regulatory requirements and audit trails in financial services
• Investment Workflows: Understanding of due diligence processes, portfolio management, and investor reporting
• Legal Document Structures: Familiarity with LPA documents, subscription agreements, and fund formation documents

Collaboration Requirements
• AI/ML Team: Work closely with GenAI engineers to optimize graph-based AI applications
• Data Architecture Team: Collaborate on overall data architecture and integration strategies
• Backend Developers: Integrate graph databases with application APIs and microservices
• DevOps Team: Ensure proper deployment, monitoring, and maintenance of graph database infrastructure
• Business Stakeholders: Translate business requirements into effective graph models and queries

Performance Expectations
• Query Performance: Ensure sub-second response times for standard relationship queries
• Scalability: Support 100k+ users with concurrent access to graph data
• Accuracy: Maintain data consistency and relationship integrity across complex fund structures
• Availability: Ensure 99.9% uptime for critical graph database services
• Integration Efficiency: Seamless integration with existing FundPanel.io systems and new AI services

This role offers the opportunity to work at the intersection of advanced graph technology and artificial intelligence, creating innovative solutions that will transform how fund administrators understand and leverage their data relationships.
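The Graph Analytics bullet above mentions algorithms such as PageRank for relationship scoring. In practice this would run inside Neo4j via the Graph Data Science library, but the underlying idea can be shown as a small pure-Python power-iteration sketch; the fund/investor network below and all entity names are invented for illustration:

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank over an adjacency dict.

    graph maps each node to the list of nodes it links to.
    """
    nodes = list(graph)
    n = len(nodes)
    ranks = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_ranks = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in graph.items():
            if targets:
                share = damping * ranks[node] / len(targets)
                for target in targets:
                    new_ranks[target] += share
            else:
                # Dangling node: spread its rank evenly across all nodes.
                for target in nodes:
                    new_ranks[target] += damping * ranks[node] / n
        ranks = new_ranks
    return ranks

# Hypothetical network: three entities all point at FundA.
network = {
    "FundA": ["InvestorX"],
    "InvestorX": ["FundA"],
    "CompanyY": ["FundA"],
    "CompanyZ": ["FundA"],
}
ranks = pagerank(network)
```

The well-connected node ends up with the highest score, which is the intuition behind using centrality measures for relationship scoring and opportunity identification.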
Posted 6 days ago
12.0 years
0 Lacs
India
Remote
Job Title: Product Manager – Content Development & Management
Location: Bangalore (Hybrid/Remote options available)
Experience Required: 12+ Years (preferably in EdTech, Higher Education, or Technical Training)
Job Type: Full-Time

About the Role:
We are looking for a seasoned Product Manager to lead the development and management of technical learning content across our AI, Data, and Software certification programs. You will be responsible for building high-quality curriculum and managing a team of Subject Matter Experts (SMEs), instructional designers, and content developers. This role requires strong technical depth, instructional design sensibility, and leadership skills to deliver content that meets both academic and industry standards.

Key Responsibilities:
End-to-End Content Management: Own the full lifecycle of content products, from concept to delivery, across AI, Data Science, Software Engineering, and emerging tech areas.
Curriculum Design: Develop and structure modular, scalable course content aligned with certification standards and market demand.
Project Leadership: Manage timelines, quality assurance, and team output for multiple concurrent content projects.
Team Management: Lead and mentor SMEs, trainers, editors, and technical writers to maintain consistency and excellence in output.
Hands-On Learning Development: Guide creation of hands-on labs, real-time projects, assessments, and case studies.
Content Review & QA: Conduct quality checks to ensure accuracy, relevance, and pedagogical effectiveness of content.
Collaboration: Work with Product, Marketing, Tech, and Academic teams to align content with platform features and learner outcomes.
Technology Integration: Oversee LMS deployments and content integration with tools like Azure Synapse, Databricks, Spark, Kafka, and Power BI.

Required Qualifications:
Minimum 12 years of experience in EdTech, technical training, or curriculum development roles.
Strong domain expertise in: Data Science, Machine Learning, Deep Learning
Programming: Python, Java, C/C++
Azure Data Engineering tools: Synapse, Databricks, Snowflake, Kafka, Spark
Experience leading technical teams or SME groups.
Proven track record of designing and delivering academic/industry-focused content and training programs.
Excellent communication and stakeholder management skills.

Preferred Qualifications:
Ph.D./M.Tech in Computer Science, IT, or related fields (PhD submission/ongoing is acceptable).
Experience working with academic institutions and EdTech platforms.
Knowledge of instructional design principles and outcome-based learning.
Familiarity with tools like Power BI, Tableau, and LMS platforms.
Published research papers in AI/ML or EdTech fields (optional but valued).

What We Offer:
An opportunity to shape the learning experiences of thousands globally.
Freedom to innovate and create impactful educational content.
A collaborative environment with a passionate team.
Competitive salary and performance-based bonuses.
Flexible work arrangements and growth opportunities.

How to Apply:
Send your resume and a portfolio (if applicable) to [insert your application email].
Subject: Application for Product Manager – Content Development
Posted 6 days ago
2.0 years
3 - 10 Lacs
India
Remote
Job Title - Sr. Data Engineer
Experience - 2+ Years
Location - Indore (onsite)
Industry - IT
Job Type - Full time

Roles and Responsibilities-
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow.
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge-
1. Core Skills:
● Proficient in Python (libraries: Pandas, NumPy) and SQL.
● Knowledge of data modeling techniques, including:
○ Entity-Relationship (ER) Diagrams
○ Dimensional Modeling
○ Data Normalization
● Familiarity with ETL processes and tools like:
○ Azure Data Factory (ADF)
○ SSIS (SQL Server Integration Services)
2. Cloud Expertise:
● AWS Services: Glue, Redshift, Lambda, EKS, RDS, Athena
● Azure Services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
● Snowflake
3. Big Data and Workflow Automation:
● Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
● Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements-
● Education:
○ Bachelor's degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
● Experience:
○ Freshers with a strong understanding, internships, and relevant academic projects are welcome.
○ 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
● Other Skills:
○ Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders.
○ Ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or plan to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
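Responsibility 3 above (data accuracy, integrity, and consistency through validation) often boils down to gating records on required fields before they enter a warehouse. A minimal sketch of such a check, with an invented record shape and field names:

```python
def validate_rows(rows, required_fields):
    """Split rows into valid/invalid based on required, non-empty fields."""
    valid, invalid = [], []
    for row in rows:
        # A field counts as present when it exists and is neither None nor "".
        if all(row.get(field) not in (None, "") for field in required_fields):
            valid.append(row)
        else:
            invalid.append(row)
    return valid, invalid

records = [
    {"id": 1, "amount": 250.0},
    {"id": 2, "amount": None},   # fails the null check
    {"id": 3, "amount": 80.5},
]
valid, invalid = validate_rows(records, required_fields=("id", "amount"))
```

In a real pipeline the invalid rows would typically be routed to a quarantine table or dead-letter location for investigation rather than silently dropped.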
Posted 6 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are a certified Great Place to Work® and recognized as one of the best companies to work for in India. We have provided full-stack product development for 325+ startups across the globe building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. We are seeking a highly motivated Quality Assurance (QA) Engineer to join our team and play a critical role in ensuring the quality, performance, and reliability of our product. As a QA Engineer, you will be responsible for testing complex data pipelines, distributed systems, and real-time processing modules that form the backbone of our platform. You will collaborate closely with developers, product managers, and other stakeholders to deliver a robust and scalable product that meets the highest quality standards. 
Requirements
Analyze technical and functional specifications of the Data Highway product to create comprehensive test strategies
Develop detailed test plans, test cases, and test scripts for functional, performance, and regression testing
Define testing criteria and acceptance standards for data pipelines, APIs, and distributed systems
Execute manual and automated tests for various components of the Data Highway, including data ingestion, processing, and output modules
Perform end-to-end testing of data pipelines to ensure accuracy, integrity, and scalability
Validate real-time and batch data processing flows to ensure performance and reliability
Identify, document, and track defects using tools like JIRA, providing clear and actionable descriptions for developers
Collaborate with development teams to debug issues, verify fixes, and prevent regression
Perform root cause analysis to identify underlying problems and recommend process improvements
Conduct performance testing to evaluate system behavior under various load conditions, including peak usage scenarios
Monitor key metrics such as throughput, latency, and resource utilization to identify bottlenecks and areas for optimization
Test APIs for functionality, reliability, and adherence to RESTful principles
Validate integrations with external systems and third-party services to ensure seamless data flow
Work closely with cross-functional teams, including developers, product managers, and DevOps, to align on requirements and testing priorities
Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives to ensure smooth communication and collaboration
Provide regular updates on test progress, coverage, and quality metrics to stakeholders
Collaborate with automation engineers to identify critical test cases for automation
Use testing tools like Postman, JMeter, and Selenium for API, performance, and UI testing as required
Assist in maintaining and improving automated test frameworks for the Data Highway product
Validate data transformations, mappings, and consistency across data pipelines
Ensure the security of data in transit and at rest, testing for vulnerabilities and compliance with industry standards
Maintain detailed and up-to-date documentation for test plans, test cases, and defect reports
Contribute to user guides and knowledge bases to support product usage and troubleshooting

Desired Skills & Experience:
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent professional experience
3+ years of experience as a Quality Assurance Engineer, preferably in testing data pipelines, distributed systems, or SaaS products
Strong understanding of data pipelines, ETL processes, and distributed systems testing
Experience with test management and defect-tracking tools like JIRA, TestRail, Zephyr
Proficiency in API testing using tools like Postman or SoapUI
Familiarity with SQL and database testing for data validation and consistency
Knowledge of performance testing tools like JMeter, LoadRunner, or similar
Experience with real-time data processing systems like Kafka or similar technologies
Familiarity with CI/CD pipelines and DevOps practices
Exposure to automation frameworks and scripting languages such as Python or JavaScript
Strong analytical and problem-solving skills with attention to detail
Excellent communication and collaboration skills to work effectively with cross-functional teams
Proactive and self-driven approach to identifying and resolving quality issues

Benefits
Our Culture:
We have an autonomous and empowered work culture encouraging individuals to take ownership and grow quickly
Flat hierarchy with fast decision making and a startup-oriented "get things done" culture
A strong, fun & positive environment with regular celebrations of our success.
We pride ourselves on creating an inclusive, diverse & authentic environment. We want to hire smart, curious, and ambitious folks, so please reach out even if you do not have all of the requisite experience. We are looking for engineers with the potential to grow! At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
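End-to-end data pipeline testing of the kind described in this role frequently reduces to reconciling a source dataset against the pipeline's output. A minimal sketch, assuming simple dict records keyed by an "id" field (the record shape is invented):

```python
def reconcile(source, target, key="id"):
    """Compare two datasets by key; report missing and mismatched records."""
    src = {row[key]: row for row in source}
    tgt = {row[key]: row for row in target}
    missing = sorted(set(src) - set(tgt))  # present in source, absent in target
    mismatched = sorted(k for k in set(src) & set(tgt) if src[k] != tgt[k])
    return {"missing": missing, "mismatched": mismatched}

source = [{"id": 1, "total": 100}, {"id": 2, "total": 55}, {"id": 3, "total": 7}]
target = [{"id": 1, "total": 100}, {"id": 2, "total": 50}]
report = reconcile(source, target)
```

The same pattern scales up when the comparison runs as SQL against source and target tables instead of in-memory dicts.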
Posted 6 days ago
0 years
0 Lacs
Andhra Pradesh
On-site
We need a Java Senior Developer with expertise in Spring Boot, Microservices, AWS, Kafka, and Kubernetes to build and maintain high-performance applications.
#1) Develop and maintain Java-based microservices using Spring Boot.
#2) Integrate with Kafka for event-driven architectures.
#3) Deploy and manage applications on AWS (EKS, ECS).
#4) Optimize performance using ElastiCache, RDS, etc.
#5) Collaborate with architects and DevOps teams.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 27,000 people that cares about your growth and seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value the collaboration and team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
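The event-driven integration in item #2 typically has to cope with Kafka's at-least-once delivery, where the same event can be redelivered. A framework-free sketch of idempotent event handling (shown in Python for brevity rather than the Spring Boot stack the role uses; event and field names are invented):

```python
def handle_order_event(event, processed_ids, orders):
    """Apply an order event exactly once; duplicates are skipped by event id."""
    if event["event_id"] in processed_ids:
        return False  # at-least-once delivery can redeliver the same event
    processed_ids.add(event["event_id"])
    orders[event["order_id"]] = event["status"]
    return True

processed_ids, orders = set(), {}
event = {"event_id": "e-1", "order_id": "o-42", "status": "SETTLED"}
first = handle_order_event(event, processed_ids, orders)
second = handle_order_event(event, processed_ids, orders)  # simulated redelivery
```

In production the processed-id set would live in a durable store (or the dedupe would be keyed on the business record itself) so that restarts do not reset it.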
Posted 6 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We're looking for a DevOps Engineer This role is Office Based, Pune Office We are looking for a skilled DevOps Engineer with hands-on experience in Kubernetes, CI/CD pipelines, cloud infrastructure (AWS/GCP), and observability tooling. You will be responsible for automating deployments, maintaining infrastructure as code, and optimizing system reliability, performance, and scalability across environments. In this role, you will… Develop and maintain CI/CD pipelines to automate testing, deployments, and rollbacks across multiple environments. Manage and troubleshoot Kubernetes clusters (EKS, AKS, GKE) including networking, autoscaling, and application deployments. Collaborate with development and QA teams to streamline code integration, testing, and deployment workflows. Automate infrastructure provisioning using tools like Terraform and Helm. Monitor and improve system performance using tools like Prometheus, Grafana, and the ELK stack. Set up and maintain Kibana dashboards, and ensure high availability of logging and monitoring systems. Manage cloud infrastructure on AWS and GCP, optimizing for performance, reliability, and cost. Build unified observability pipelines by integrating metrics, logs, and traces. Participate in on-shift rotations, handling incident response and root cause analysis, and continuously improve automation and observability. Write scripts and tools in Bash, Python, or Go to automate routine tasks and improve deployment efficiency. You’ve Got What It Takes If You Have… 3+ years of experience in a DevOps, SRE, or Infrastructure Engineering role. Bachelor's degree in Computer Science, IT, or related field. Strong understanding of Linux systems, cloud platforms (AWS/GCP), and containerized microservices. Proficiency with Kubernetes, CI/CD systems, and infrastructure automation. 
Experience with monitoring/logging tools: Prometheus, Grafana, InfluxDB, and the ELK stack (Elasticsearch, Logstash, Kibana)
Familiarity with incident management tools (e.g., PagerDuty) and root cause analysis processes.
Basic working knowledge of:
Kafka – monitoring topics and consumer health
ElastiCache/Redis – caching patterns and diagnostics
InfluxDB – time-series data and metrics collection

Our Culture
Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today.

Who We Are
Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today.
Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
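The Prometheus work listed above often starts from the text exposition format that targets serve on /metrics. As a feel for the scripting side of the role, here is a deliberately simplified parser sketch: it skips HELP/TYPE comment lines and assumes label values contain no spaces, which real exposition data does not guarantee:

```python
def parse_metrics(text):
    """Parse simplified Prometheus text-format lines into {series: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        # Value is the token after the last space; everything before it
        # (name plus labels) identifies the series.
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP http_requests_total Total HTTP requests.
http_requests_total{code="200"} 1027
http_requests_total{code="500"} 3
"""
metrics = parse_metrics(sample)
```

For anything beyond a quick script, an official client library's parser is the right tool; this only illustrates the shape of the data that dashboards and alerts are built on.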
Posted 1 week ago
10.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description: At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We do this by driving Responsible Growth and delivering for our clients, teammates, communities and shareholders every day. Being a Great Place to Work is core to how we drive Responsible Growth. This includes our commitment to being an inclusive workplace, attracting and developing exceptional talent, supporting our teammates’ physical, emotional, and financial wellness, recognizing and rewarding performance, and how we make an impact in the communities we serve. Bank of America is committed to an in-office culture with specific requirements for office-based attendance and which allows for an appropriate level of flexibility for our teammates and businesses based on role-specific considerations. At Bank of America, you can build a successful career with opportunities to learn, grow, and make an impact. Join us! Job Description: The Markets Application Production Services (MAPS) group is a global group responsible for the management of production systems across Global Markets Technology. The group works closely with the business and provides application support. The group closely interacts with the development and infrastructure teams to manage all changes to the production environment. MAPS has a strong focus on operational excellence and process improvement. Bank of America Merrill Lynch is looking to hire an experienced Application Support Analyst to join our Global Markets Post Trade Technology and Operations support - Markets Application Production Services Team. You will join a regional team based in several locations whose primary focus will be on providing front line support for Equity, Derivatives, Clearing, and Settlement Applications related to Global Markets Operations & Middle Office. 
This is an excellent opportunity to join a well-established team, supporting distributed platforms and Oracle-based applications while partnering with our development team to roll out support for state-of-the-art, real-time, high-availability systems developed with cutting-edge technologies.

Responsibilities:
Deliver application support for in-house applications and vendor products used by Global Markets Operations teams in India and the region
Triage and manage production incidents to restore service as swiftly as possible
Manage clear and crisp incident communications to a variety of stakeholders
Adhere to, and oversee adherence to, enterprise-defined standard operating procedures
Diagnose and resolve complex issues involving root cause analysis and end-to-end coordination and support of the problem resolution process
Ensure the documentation of problem resolution processes and procedures is maintained to the highest quality and accuracy
Correlate events across multiple systems to proactively surface and resolve deep, underlying issues
Look across the entire production environment to aid continuous improvement in the state and supportability of production systems, including rotational weekend support and rotational business events support outside of business hours
Build and maintain relationships with business users and other stakeholders
Work closely with development and infrastructure teams to ensure that issues and defects are reported and actioned to meet business requirements and timelines
Learn, expand, and incorporate application support requirements across global operations teams while building APAC presence with teams across Singapore, Australia, Japan, and India
Work closely with other MAPS team members across the Asia Pacific region and globally to ensure consistency in service standards and delivery

Skills:
Education to degree level in an engineering or science discipline
10+ years of strong application support experience in the banking/finance industry, especially Markets
Desirable to have hands-on work experience in a functional or shift lead capacity, with an excellent understanding of ITIL concepts around Incident, Problem, and Change management
Willing and able to lead incidents as they occur; flexible approach, able to adapt to shifting priorities or changing conditions
Good knowledge of infrastructure systems, platforms, databases & middleware
Troubleshooting and analyzing logs using Linux command-line interfaces, Splunk, Kibana, and other monitoring or log aggregation systems/tools is a must
Advanced Excel knowledge
Excellent verbal and written communication skills; able to influence, facilitate, and collaborate
Strong analytical, problem-solving, and troubleshooting skills to thrive in a time-sensitive and complex production environment
Creative and innovative, able to find solutions for continuous improvement and operational excellence
Collaborative team player who can work independently where needed
Comfortable in a multicultural environment across a multi-region production support landscape
Ability and experience in leading a matrix functional team would be an advantage
Stakeholder management experience and the ability to build relationships and form partnerships with users when dealing with production issues and providing the support service to the user base
Good understanding of capacity management and assessment
Knowledge of Post-Trade Lifecycle, Trade Processing, Clearing, Matching and Settlement is desirable
Customer focus / Client service orientation: an underlying desire to serve clients and a motivation to ensure that business needs are met

Desired Technical Skills:
The candidate must demonstrate strong working knowledge of:
OS (Windows/Linux, virtual compute) based infrastructure
Database Technologies – Oracle, PL/SQL, SQL Server
Scripting Languages – Shell, Bash, Python
Monitoring Technology – ITRS Geneos / Splunk / Dynatrace etc.
Pub/Sub messaging – IBM MQ, Kafka, Tibco EMS, Distributed Event Processing, AMPS
Ansible and Autosys scheduling
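Correlating events across multiple systems, as this role requires, usually starts with quick scripted log triage before escalating to Splunk or Kibana. A minimal sketch, assuming an invented "LEVEL component: message" line format:

```python
from collections import Counter

def triage(log_lines):
    """Count ERROR entries per component from 'LEVEL component: message' lines."""
    errors = Counter()
    for line in log_lines:
        parts = line.split(" ", 2)
        if len(parts) >= 2 and parts[0] == "ERROR":
            errors[parts[1].rstrip(":")] += 1
    return errors

logs = [
    "INFO settlement: batch started",
    "ERROR clearing: timeout waiting on MQ ack",
    "ERROR clearing: timeout waiting on MQ ack",
    "ERROR settlement: stale price feed",
]
counts = triage(logs)
```

A per-component error count like this is often the first signal for which system in a trade-processing chain (clearing, matching, settlement) to investigate during an incident.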
Posted 1 week ago