3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
We are looking for a skilled and motivated C++ Software Engineer to join our Price Team at Trading Technologies. You will play a critical role in developing and enhancing low-latency price generation systems that are essential to our cutting-edge trading platforms. The ideal candidate will have a background in high-performance systems, trading infrastructure, and algorithmic implementations. As a member of the Price Team, you will collaborate with fellow engineers to create solutions that enable efficient, real-time pricing strategies for professional derivatives traders.

During the Initial Training Period (estimated 3-6 months; duration may vary based on the candidate's experience), you will:
- Work closely with the Price Team to grasp the architecture of our trading platform
- Acquire a deep understanding of the price generation system and its integration into our platform
- Collaborate with other engineering teams to understand business requirements and devise effective solutions
- Engage in hands-on training sessions covering low-latency, high-performance trading systems
- Participate in ongoing code reviews and performance evaluations to enhance technical skills

Upon successful completion of the Training Period, your responsibilities will include:
- Continuously enhancing and optimizing the price generation systems
- Developing new features and enhancing existing components for market data management and pricing infrastructure
- Ensuring system reliability, performance, and scalability under trading conditions
- Collaborating with other engineering teams to integrate new pricing models and strategies
- Taking ownership of code quality, testing, and performance tuning
- Actively participating in design discussions, code reviews, and mentoring of junior engineers

Qualifications:
- Strong C++ development experience, especially in a low-latency, high-performance environment
- Knowledge of financial market data feeds
- Familiarity with trading systems, financial products, and market dynamics
- Experience with multi-threaded and distributed systems
- Proficiency in modern C++ standards (C++11/14/17/20)
- Exposure to performance optimization techniques and profiling tools
- Familiarity with low-latency messaging systems or real-time data streaming
- Understanding of multithreading, synchronization, and concurrency
- Strong analytical and problem-solving skills
- Excellent communication skills and the ability to collaborate in a fast-paced team environment

Benefits:
- Competitive benefits package including medical, dental, and vision coverage
- Flexible work schedules with a hybrid work model of 2 days on-site
- Generous PTO days, volunteering days, and training days for professional development
- Tech resources, including a rent-to-own program for laptops and mobile phones
- Subsidy contributions toward gym memberships and health/wellness initiatives
- Milestone anniversary bonuses
- Inclusive and collaborative work culture promoting diversity and inclusion

Join us at Trading Technologies, a leading Software-as-a-Service (SaaS) technology platform provider in the global capital markets industry. Our innovative TT platform connects to major international exchanges, offering advanced tools for trade execution, market data solutions, analytics, and more to premier financial institutions worldwide. Be part of a dynamic team that drives excellence in trading technology and shapes the future of the financial industry.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The System Analyst (Market Oriented) role requires applying system analysis expertise and market-driven research to elevate the company's competitive edge. Your primary responsibility will be to continuously assess the company's offerings in comparison to competitors, identify gaps, and suggest innovative, technology-driven solutions, particularly in cloud computing, high-performance computing (HPC), and distributed systems. Collaborating closely with product and development teams is essential to steer market leadership through data-backed insights and technical foresight.

Your key responsibilities include:
- Conducting in-depth market research, competitive benchmarking, and trend analysis to identify platform enhancement opportunities and guide product decisions.
- Analyzing and recommending improvements across public cloud platforms, virtualization layers, container platforms, and infrastructure technologies.
- Proposing innovative solutions leveraging knowledge of DevOps, AIOps, MLOps, and distributed systems to enhance platform scalability, reliability, and differentiation in the market.
- Working closely with product managers, architects, and engineering teams to translate business needs into system requirements and ensure alignment with the product roadmap.
- Developing detailed system specifications, UML diagrams, wireframes, and user stories for efficient planning and development.
- Defining system-level KPIs, tracking performance metrics, and providing actionable insights to stakeholders for continuous improvement and strategic planning.
- Presenting findings, technical analyses, and recommendations in a clear and compelling manner to technical and business stakeholders for informed decision-making.

Key Requirements:
- Proficiency in cloud computing, high-performance computing (HPC), and distributed systems.
- Demonstrated ability to conduct market research and derive strategic, data-driven insights.
- Strong communication and collaboration skills for effective cross-functional teamwork and stakeholder engagement.

Educational Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience:
- 4+ years of experience in system analysis or related roles, with expertise in system architectures and analysis techniques.

This role falls under the Software Division category.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
At Guidewire, we take pride in supporting our customers' mission to safeguard the world's most valuable investments. Insurance plays a crucial role in protecting our homes, businesses, and other assets, providing aid in times of need caused by natural disasters or accidents. Our goal is to provide a platform that enables Property and Casualty (P&C) insurers to offer the products and services individuals need to recover from life's most challenging events.

We are seeking a product management professional to join our Analytics and Data Services (ADS) team at Guidewire. The ADS team is dedicated to defining and designing new capabilities for the insurance market through our cutting-edge software solutions. In this role, you will collaborate with a diverse team of 50 engineers, data scientists, and risk modelers to create a dynamic cyber insurance data and analytics product suite. This suite will leverage data and machine learning to address use cases including cyber risk underwriting, pricing, enterprise risk management, and cyber threat assessment, which is identified as the #1 risk to US national security.

Reporting to the Cyence Product Management team, you will play a vital role in driving innovation within an entrepreneurial culture. You will thrive in an environment where our core values of Integrity, Rationality, and Collegiality are ingrained in our daily operations. As a candidate, you should have a background in software, data, and analytics, along with experience working in a fast-paced environment involving multiple teams across different locations. You must be a problem-solver who is enthusiastic about developing top-notch products and overcoming market challenges. Your attention to detail, ability to motivate others, and collaboration skills will be key to supporting various teams, including Platform, UX, modeling, ML, Data Science, Quality Assurance, and GTM.

Your responsibilities will include:
- Vision: Envisioning innovative solutions, promoting our cyber vision, and driving breakthroughs to simplify complexity.
- Technical Mastery: Collaborating with software and data teams, implementing best practices in product management, and owning the end-to-end requirements documentation process.
- Product Leadership: Cultivating a culture of curiosity and craftsmanship, inspiring and developing R&D teams, and establishing product goals that drive motivation.
- Execution: Achieving business outcomes through forward-thinking products, contributing to the creation and communication of roadmaps, and building trust through transparency and consistent delivery.

Qualifications we are looking for:
- Minimum 3 years of experience as a product manager, with a track record of delivering complex team projects on time and with high quality.
- 3+ years of experience in technical data management, integration architecture of cloud solutions, and security.
- Strong desire to address complex insurance challenges using a B2B SaaS product model.
- Excellent attention to detail and communication skills.
- Proactive, focused, and quick to take ownership of tasks.
- Familiarity with tools such as Aha, the Atlassian suite, databases, and prototyping tools; technical knowledge of big data and cloud technologies is preferred.
- Conceptual understanding of microservices, distributed systems, AWS, and the Big Data ecosystem.
- Comfortable with data ingestion, cataloging, integration, and enrichment concepts.
- Previous experience with B2B SaaS companies and software engineering is advantageous.
- Bachelor's or Master's degree in engineering, analytics, mathematics, or software development.
- Ability to overlap at least 2 hours with US time zones 3-4 days a week.

About Guidewire:
Guidewire is the trusted platform for P&C insurers to engage, innovate, and grow efficiently. Our platform integrates digital, core, analytics, and AI services, delivered as a cloud service. With over 540 insurers in 40 countries relying on Guidewire, we support new ventures as well as the largest and most complex organizations worldwide. As a partner to our customers, we continuously evolve to ensure their success. With a remarkable implementation track record of over 1600 successful projects, supported by the industry's largest R&D team and partner ecosystem, we are dedicated to accelerating integration, localization, and innovation through our Marketplace. For more information, please visit www.guidewire.com and follow us on Twitter: @Guidewire_PandC.
Posted 2 weeks ago
3.0 - 12.0 years
0 Lacs
Maharashtra
On-site
As an Engineering Lead at Aithon Solutions, a dynamic organization specializing in operations and technology solutions for the alternative asset management sector, you will play a pivotal role in steering the development of our Generative AI products. Your primary mandate will involve overseeing the entire product lifecycle to ensure scalability, top-notch performance, and continuous innovation. Collaborating with diverse teams, you will champion engineering excellence and cultivate a culture that thrives on high performance. Your role demands a profound understanding of AI/ML, proficiency in Python coding, and expertise in distributed systems, cloud architectures, and contemporary software engineering practices. Moreover, hands-on coding experience is crucial for this position, ensuring a comprehensive grasp of the technical landscape you will be navigating.

Key Responsibilities:
- **Technical Leadership & Strategy:** Craft and execute the technology roadmap for our Generative AI products, aligning technological advancements with overarching business objectives.
- **AI/ML Product Development:** Drive the development of AI-powered products, fine-tuning models for optimal performance, scalability, and real-world applicability.
- **Engineering Excellence:** Establish industry best practices in software development, DevOps, MLOps, and cloud-native architectures to foster a culture of continuous improvement.
- **Team Leadership & Scaling:** Recruit, mentor, and supervise a proficient engineering team, nurturing a culture characterized by innovation and collaborative spirit.
- **Cross-Functional Collaboration:** Collaborate closely with Product, Data Science, and Business teams to translate cutting-edge AI research into practical applications.
- **Scalability & Performance Optimization:** Architect and enhance distributed systems to ensure the efficient deployment of AI models across cloud and edge environments.
- **Security & Compliance:** Implement robust frameworks for AI ethics, data security, and compliance with industry standards and regulations.

Qualifications & Skills:
- Demonstrable experience of at least 12 years in software engineering, with a minimum of 3 years in leadership positions within AI-focused enterprises.
- Profound expertise in Generative AI, Deep Learning, NLP, Computer Vision, and model deployment.
- Hands-on familiarity with ML frameworks and leading cloud platforms such as AWS, GCP, and Azure.
- Proven track record of scaling AI/ML infrastructure and optimizing models for superior performance and cost-effectiveness.
- Comprehensive understanding of distributed systems, cloud-native architectures, and microservices.
- Proficiency in MLOps, CI/CD, and DevOps practices.
- Strong problem-solving acumen, strategic thinking, and adept stakeholder management skills.
- Ability to attract, nurture, and retain top engineering talent in a competitive marketplace.

Why Join Us:
- Lead a forward-thinking team at the forefront of Generative AI innovation.
- Contribute to the development of scalable, high-impact AI products that are shaping the future of technology.
- Embrace a truly entrepreneurial culture that champions imagination, innovation, and teamwork.
- Collaborate with a high-caliber, collaborative team dedicated to driving excellence in the tech industry.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As part of Adobe, you will play a crucial role in advancing digital experiences for individuals and organizations worldwide. Adobe is dedicated to providing tools and resources for creating exceptional digital content, empowering users to design and deliver impactful images, videos, and applications that revolutionize customer interactions across various platforms. We are committed to fostering a diverse and inclusive work environment where every individual is valued, respected, and offered equal opportunities. At Adobe, we believe that innovation can come from anyone within the organization, and we encourage all employees to contribute their ideas and insights to drive our mission forward. Join our team at Adobe and immerse yourself in a globally recognized work environment that thrives on creativity and collaboration.

**The Opportunity**
In the Technical Communication group at Adobe, we are developing a cutting-edge Component Content Management System (CCMS) that drives the creation and delivery of structured content for large enterprises. Our system plays a vital role in enabling new-age experiences such as chatbots, voice-based devices, and omni-channel content delivery. We are seeking a dynamic and technically proficient leader to help us realize our vision and enhance our product's capabilities for our customers.

**About the Team**
The AEM Guides team at Adobe focuses on a modern-technology CCMS that caters to Fortune 500 companies publishing millions of documents using our product. As part of this team, you will experience a startup-like environment within a larger organization, collaborating closely with various functions to meet the needs of our enterprise customers. The team is rapidly expanding, offering exciting opportunities for growth and innovation.

**The Challenge**
We are looking for a highly motivated SDET with expertise in CI/CD processes to drive automation testing and integration within our continuous deployment pipelines. Your primary responsibility will be to design and implement automated testing strategies, working closely with development, operations, and release engineering teams to ensure the consistent delivery of high-quality software.

**Roles & Responsibilities**
As an individual contributor, your key responsibilities will include:
- Designing and developing automation frameworks for web, API, and backend testing, suggesting enhancements as needed.
- Integrating automated test suites into CI/CD pipelines such as Jenkins, GitLab CI, and CircleCI for efficient and early testing.
- Collaborating with development teams to embed quality practices throughout the software development lifecycle, fostering strong interpersonal relationships.
- Demonstrating expertise in cloud infrastructure, with experience in AWS, Azure, and other cloud platforms.
- Utilizing tools like Splunk for effective logging and monitoring of test processes.
- Automating test data setup and ensuring consistency across different test environments.
- Mentoring junior engineers and contributing to the team's technical growth.

**Required Skills & Expertise**
- Education: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- Experience: 8+ years in SDET or QA Automation roles with a focus on CI/CD and cloud infrastructure.
- Testing expertise: Proficiency in test automation principles and tools like Selenium, Cypress, JUnit, TestNG, or Cucumber.
- CI/CD tools: Extensive experience with Jenkins, GitLab CI, CircleCI, Travis CI, Azure DevOps, etc.
- Strong understanding of algorithms and data structures.
- Proficiency in programming languages like Java, Python, JavaScript, or Go for test script creation.
- Knowledge of Docker and Kubernetes for managing testing environments.
- Exposure to monitoring tools like ELK Stack, Prometheus, or Grafana.
- Comfortable working in Agile/DevOps environments with a focus on collaboration and automation.
- Excellent communication and collaboration skills for cross-functional teamwork.
- Experience in testing microservices-based architectures and distributed systems.

Join us at Adobe and be part of a team that is shaping the future of digital experiences. Explore more about Adobe's culture, benefits, and corporate social responsibility initiatives to see how you can make an impact with us. If you require accommodations to access our website or complete the application process, please reach out to accommodations@adobe.com or call (408) 536-3015.
Posted 2 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
We are looking for a highly skilled Senior Software Engineer with expertise in C#, WinForms, and network programming, preferably with experience in trading applications. The ideal candidate will be responsible for designing and building responsive desktop-based trading interfaces, integrating network feeds, and ensuring robust real-time performance.

Role Expectations:

UI Development:
- Design and maintain high-performance WinForms-based trading interfaces using C# and the .NET Framework (4.0/4.2).
- Implement complex UI components such as DataGridView, custom controls, and dynamic forms/dialogs.
- Follow best practices in OOP, including use of interfaces, abstract classes, and design patterns like Observer and Factory (a small illustrative sketch of the Observer pattern follows this posting).
- Debug, test, and enhance multi-threaded UI components for performance and stability.
- Handle data binding and ensure smooth user interactions across trading modules.

DLL and Library Integration:
- Develop and integrate custom DLLs (managed/unmanaged) for reusable business logic and UI enhancements.
- Utilize third-party WinForms libraries for advanced UI features.

Tools and Technologies:
- Version control: Git or TFS.
- Database: SQL Server (for data storage and retrieval).
- Logging and exception handling in distributed system environments.
- Proficiency with AI-powered tools such as GitHub Copilot and ChatGPT.
- Prompt engineering skills to utilize AI for development, testing, and optimization workflows.

Must Have:
- Understanding of Equity Markets, Derivatives, and Order Management Systems (OMS).
- Familiarity with Indian stock exchanges (e.g., NSE, BSE).
- Experience working with market feeds (e.g., FIX, TCP-based protocols).
- Proven exposure to real-time trading applications and data processing systems.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience in high-frequency trading or low-latency systems is a plus.
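The Observer pattern named above is language-agnostic; the following is a minimal, hypothetical sketch of it in Python for brevity (the role itself calls for C#/.NET, where the same idea is usually expressed with interfaces or events). The class names `PriceFeed` and `PriceTicker` are illustrative, not taken from the posting.

```python
class PriceFeed:
    """Subject: publishes price updates to all registered observers."""

    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        self._observers.append(observer)

    def publish(self, symbol, price):
        # Notify every observer of the new tick.
        for observer in self._observers:
            observer.on_price(symbol, price)


class PriceTicker:
    """Observer: reacts to each update (e.g., refreshing a grid row)."""

    def on_price(self, symbol, price):
        print(f"{symbol}: {price:.2f}")


if __name__ == "__main__":
    feed = PriceFeed()
    feed.subscribe(PriceTicker())
    feed.publish("NIFTY", 24312.55)
```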
Posted 2 weeks ago
5.0 - 8.0 years
11 - 15 Lacs
Gurugram
Work from Office
Key Responsibilities:
- Design, develop, and maintain scalable and secure backend applications using Java and Spring Boot 3.2
- Develop RESTful APIs and integrate with third-party services and internal systems
- Work on Spring Batch for handling scheduled or high-volume background jobs
- Design and develop microservices, ensuring inter-service communication and data consistency
- Build and maintain an intuitive portal/dashboard for internal/external stakeholders with appropriate backend logic
- Optimize queries and work closely with the Oracle DB, writing complex SQL queries and performance tuning
- Collaborate with front-end developers, QA engineers, and product managers to deliver high-quality solutions
- Conduct code reviews and provide guidance to junior developers
- Troubleshoot and debug application issues, perform root cause analysis, and implement effective solutions
- Write clear, maintainable, and testable code with appropriate unit and integration tests
- Take ownership of features from design to deployment and support
- Participate in Agile ceremonies and contribute to sprint planning and retrospectives

Required Skills and Experience:
- 5-7 years of proven experience as a Java backend developer
- Strong programming skills in Java with a deep understanding of object-oriented programming
- Extensive experience in Spring Boot 3.2, Spring Batch, and Spring Job Scheduling
- Proficiency in developing and consuming RESTful APIs
- Hands-on experience with microservices architecture and distributed systems
- Solid experience working with Oracle Database and writing optimized SQL queries
- Experience in integrating backend services with front-end portals or dashboards
- Strong understanding of software engineering best practices, including coding standards, code reviews, source control management, build processes, testing, and operations
- Excellent analytical and problem-solving skills; must be able to analyze complex business requirements and build logical solutions
- Familiarity with tools like Git, Maven/Gradle, Jenkins, and containerization platforms (Docker/Kubernetes) is a plus
- Good communication and collaboration skills to work effectively in a team environment

Nice to Have:
- Experience in performance tuning and application profiling
- Exposure to CI/CD pipelines and DevOps practices
- Knowledge of front-end technologies (basic level) for better integration with the backend

Educational Qualification:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Posted 2 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Gurugram
Work from Office
Job Title: Kafka Integration Specialist

Job Description: We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems.

Key Responsibilities:
- Design, implement, and maintain Kafka-based data pipelines (a minimal producer/consumer sketch in Python follows this posting).
- Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies.
- Manage Kafka clusters, ensuring high availability, scalability, and performance.
- Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions.
- Implement best practices for data streaming, including message serialization, partitioning, and replication.
- Monitor and troubleshoot Kafka performance, latency, and security issues.
- Ensure data integrity and implement failover strategies for critical data pipelines.

Required Skills:
- Strong experience in Apache Kafka (Kafka Streams, Kafka Connect).
- Proficiency in programming languages like Java, Python, or Scala.
- Experience with distributed systems and data streaming concepts.
- Familiarity with Zookeeper, Confluent Kafka, and Kafka broker configurations.
- Expertise in creating and managing topics, partitions, and consumer groups.
- Hands-on experience with integration tools such as REST APIs, MQ, or ESB.
- Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment.

Nice to Have:
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation.
- Knowledge of data serialization formats like Avro, Protobuf, or JSON.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of hands-on experience in Kafka integration projects.
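Since the posting lists Python among acceptable languages, here is a minimal, hedged sketch of a Kafka producer and consumer using the kafka-python library. The broker address, topic name, and payload are assumptions for local experimentation, not details from the posting.

```python
import json

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# Produce one JSON message to an assumed local broker and topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "qty": 100})
producer.flush()

# Consume from the same topic, starting at the earliest offset.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
    break  # stop after a single message in this toy example
```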
Posted 2 weeks ago
3.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As the Head of Engineering for our AI-powered Autonomous Cloud Portal, you will play a crucial role in leading multiple engineering teams through the full software development life cycle. Your responsibilities will include fostering an innovative engineering culture, defining and executing technical strategies, collaborating with various teams, establishing agile practices, and ensuring compliance with security and regulatory standards.

In this leadership position, you will build, mentor, and oversee cross-functional teams spanning backend, frontend, AI/ML, QA, and DevOps. Your focus will be on promoting innovation, ownership, quality, and continuous improvement within the engineering organization. Additionally, you will drive the recruitment, training, and performance management processes for engineering personnel.

Your expertise in cloud-native systems, AI/ML integration, and DevOps practices will be essential for defining and executing the engineering roadmap aligned with business objectives. You will be tasked with upholding best practices in architecture, security, scalability, testing, and continuous integration/continuous deployment (CI/CD). Regularly reviewing technical designs and code to ensure the feasibility and high standards of AI-driven features will also be a key aspect of your role.

Collaboration with Product Management, Solution Architects, and Cloud Architects will be imperative to deliver seamless features. You will be responsible for ensuring effective coordination between teams for planning, sprints, releases, and documentation, as well as managing stakeholder communications related to technical deliverables and team velocity. Establishing Agile practices, conducting sprint reviews and retrospectives, and implementing engineering key performance indicators (KPIs), quality assurance (QA) standards, and incident handling processes will fall under your purview. Furthermore, you will oversee compliance with security and regulatory requirements in cloud environments.

To qualify for this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 10 years of engineering experience, including a minimum of 3 years in leadership positions. Your technical expertise should include a strong background in Python, FastAPI/Django, and DevOps pipelines (a minimal FastAPI sketch follows this posting), as well as familiarity with system design, microservices, AI integration, and distributed systems. Experience in building AI/ML-enabled enterprise platforms, knowledge of cloud platforms such as AWS, Azure, or GCP, familiarity with compliance standards like ISO, NIST, and SOC 2, and knowledge of observability platforms like Grafana, Prometheus, and ELK would be advantageous.

Join us to lead the development of an AI cloud automation platform, build high-performing engineering teams, and make a significant impact on architecture, product delivery, and AI integration within a cutting-edge technology environment.
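The posting names Python with FastAPI/Django as the core backend stack; below is a minimal, hypothetical FastAPI service sketch for orientation. The endpoint, model fields, and run command are assumptions, not details from the posting.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Cloud portal API (illustrative)")


class HealthStatus(BaseModel):
    service: str
    healthy: bool


@app.get("/health", response_model=HealthStatus)
def health() -> HealthStatus:
    # A real portal would aggregate checks from downstream services here.
    return HealthStatus(service="cloud-portal", healthy=True)

# Run locally (assuming uvicorn is installed):
#   uvicorn main:app --reload
```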
Posted 2 weeks ago
5.0 - 9.0 years
0 - 0 Lacs
Karnataka
On-site
As a key member of our team, you will have a significant impact on the technical direction and infrastructure of our platform. Collaborating with engineers, designers, and founders, you will play a hands-on role in shaping our product and company values. This is a unique opportunity to make a lasting impression in a dynamic early-stage environment where your contributions can truly define our future.

Responsibilities:
- Design and implement the infrastructure in close collaboration with the team.
- Work with engineers and designers to create a user-friendly platform that meets customer needs.
- Define and execute the technical roadmap, balancing innovation and technical debt to achieve business goals.
- Develop and maintain development processes, tools, and workflows.
- Contribute to the product roadmap.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a DevOps engineer, specializing in cloud-native infrastructures.
- Proficiency in AWS, GCP, and Azure, with knowledge of cost and billing.
- Familiarity with CloudFormation, Terraform, Kubernetes, Docker, and distributed systems.
- AWS, GCP, or Azure certifications are a plus.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Experience in creating prototypes and rapid experimentation.

Benefits:
- Competitive salary within local market standards.
- Performance-based bonuses.
- Company equity options.
- Comprehensive healthcare coverage.
- Unlimited paid time off (subject to manager approval).
- Quarterly team retreats and offsites.
- Flexible work arrangements, including hybrid/remote options.

Compensation: 2,200,000 INR - 3,500,000 INR.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a full stack developer at our company, you will be part of a dynamic team building a workflow automation system to streamline manual processes. Your role will involve working on multiple web applications, delivering features within agreed timelines, and ensuring high-quality deliveries. You will establish best practices for software development, mentor junior engineers, and collaborate with non-technical stakeholders to align engineering with business requirements. Additionally, you will guide developers in adopting the system and improving best practices.

You must have a minimum of 8 years of programming experience with a focus on JavaScript, including a track record of managing diverse deliveries and experience with design systems at scale. Your strong design and architectural skills will be crucial in developing highly scalable web applications that cater to millions of users. You should be adept at building adaptive UI components for web and mobile platforms, driving technical direction for design systems, and understanding the full software development lifecycle.

Excellent communication skills are essential for this role, along with experience in startups, product-based companies, or hyper-growth environments. Your expertise in ReactJS, RESTful APIs, microservices, and distributed systems will be valuable, as will your knowledge of TypeScript, Redux, Redux Saga, Jest, HTML, and responsive design. Familiarity with Scrum Agile methodology, NodeJS, Backend For Frontend (BFF), config-driven design, and microfrontends is preferred. Experience in native mobile development, Figma, Storybook, and documentation tools would also be advantageous.

You should hold a BE/B.Tech or equivalent degree in Computer Science or a related field. If you are passionate about enhancing developer experience and creating scalable, reusable UI components, we encourage you to join our team and contribute to fulfilling your career aspirations.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Telangana
On-site
The ideal candidate should have expertise in back-end programming languages like JavaScript and TypeScript, with a special focus on Node.js. You should be proficient in writing HTML, CSS, and JavaScript. Experience working with RDBMS systems like Postgres or MySQL is required. You should have a strong background in implementing testing platforms and unit tests, as well as a good understanding of Git. An appreciation for clean and well-documented code is essential.

An understanding of microservices and distributed systems is a plus. Experience in deploying applications on any cloud infrastructure is desirable. Familiarity with building pipelines using CI/CD tools such as Jenkins is a bonus. The candidate should have 2 to 5 years of relevant experience.

If you meet these requirements and are interested in the position, please send your resume to hr@rivan.in. We offer flexible working hours, allowing you to work at any time, as we focus on the quality of work rather than the number of login hours. Additionally, upon successful completion of the internship period, you will receive an Internship Certificate from the company. We are also happy to provide a Letter of Recommendation based on your performance during your time with us.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy.

We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources - time series, equipment, documents, 3D objects - into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

Key Responsibilities:
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).

Qualifications:
- 3+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or a related discipline.

Bonus Qualifications:
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques (a toy entity-matching sketch follows this posting).
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

About the Role and Key Responsibilities:
- Develop Data Fusion - a robust, state-of-the-art SaaS for industrial data.
- Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion.
- Work with distributed open-source software such as Kubernetes, Kafka, Spark, and similar to build scalable and performant solutions.
- Help shape the culture and methodology of a rapidly growing company.
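As a rough illustration of the entity-matching idea mentioned above, here is a toy Python sketch that pairs records from two sources using standard-library string similarity. It is purely hypothetical; the platform described is Kotlin-based, and its actual matching logic is not shown here.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] between two entity names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_entities(source_a, source_b, threshold=0.3):
    """Pair each name in source_a with its best-scoring match in source_b."""
    matches = []
    for name_a in source_a:
        best = max(source_b, key=lambda name_b: similarity(name_a, name_b))
        score = similarity(name_a, best)
        if score >= threshold:
            matches.append((name_a, best, round(score, 2)))
    return matches


if __name__ == "__main__":
    # Illustrative sample data: time-series tags vs. equipment names.
    timeseries_tags = ["Pump-101 discharge pressure", "Compressor A vibration"]
    equipment_names = ["PUMP 101", "Compressor-A"]
    print(match_entities(timeseries_tags, equipment_names))
```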
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. Our current engineering focus is on modernizing the architecture for better scalability and orchestration compatibility, refactoring core services, and laying the foundation for future AI-based enhancements. This pivotal development initiative aligns directly with a multi-year digital transformation strategy and has clear roadmap milestones.

We are searching for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join our newly established scrum team responsible for enhancing a core data contextualization platform. This service is crucial in associating and matching data from diverse sources such as time series, equipment, documents, and 3D objects into a unified data model. As a Senior Backend Engineer, you will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This role is high-impact, contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

Your key responsibilities will include designing, developing, and maintaining scalable, API-driven backend services using Kotlin; aligning backend systems with modern data modeling and orchestration standards; collaborating with engineering, product, and design teams for seamless integration; implementing and refining RESTful APIs; participating in architecture planning, technical discovery, and integration design; conducting load testing and improving unit test coverage; driving software development best practices; and ensuring compliance with multi-cloud design standards.

To qualify for this role, you should have at least 5 years of backend development experience with a strong focus on Kotlin, the ability to design and maintain robust, API-centric microservices, hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows, solid knowledge of PostgreSQL, Elasticsearch, and object storage systems, a strong understanding of distributed systems, data modeling, and software scalability principles, excellent communication skills, and a degree in Computer Science or a related discipline. Bonus qualifications include experience with Python, knowledge of data contextualization or entity resolution techniques, familiarity with 3D data models, industrial data structures, or hierarchical asset relationships, exposure to LLM-based matching or AI-enhanced data processing, and experience with Terraform, Prometheus, and scalable backend performance testing.

In this role, you will develop Data Fusion, a robust SaaS for industrial data, and work on solving concrete industrial data problems by designing and implementing APIs and services on top of Data Fusion. You will collaborate with application teams to ensure a delightful user experience and work with open-source software like Kubernetes, Kafka, and Spark, databases such as PostgreSQL and Elasticsearch, and storage systems like S3-API-compatible blob stores.

At GlobalLogic, we offer a culture of caring, learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust organization where integrity is key. Join us as we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
The Software Engineer II role, based in Bangalore, requires 5-7 years of experience and an immediate-joiner notice period. You will work in a fast-growing engineering team developing scalable, secure, and cloud-native backend systems.

Key responsibilities:
- Designing and developing backend services using Python (Flask/FastAPI)
- Building scalable systems with robust algorithms and data structures
- Participating in architecture and design discussions
- Reviewing code for quality, performance, and security
- Troubleshooting and resolving complex system issues
- Working with SQL and NoSQL databases (MySQL, MongoDB) - a minimal Flask + PyMongo sketch follows this posting
- Creating cloud-native apps on Microsoft Azure
- Utilizing Docker and Kubernetes for containerization and orchestration
- Documenting systems and sharing knowledge internally
- Optionally integrating Generative AI models and pipelines, while advocating best practices in DevOps, testing, and system design

Required skills:
- Strong Python proficiency with Flask or FastAPI
- Solid understanding of data structures, algorithms, and Object-Oriented Programming (OOP)
- Experience with distributed systems and REST APIs
- Knowledge of MySQL and MongoDB
- Hands-on experience with Azure cloud services
- Proficiency in Docker and Kubernetes
- Good debugging and performance tuning abilities
- Familiarity with microservices and system architecture
- Strong communication and collaboration skills

Preferred skills:
- Experience with GenAI integration
- Familiarity with CI/CD and Infrastructure-as-Code (Terraform, Azure DevOps)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
- Background in SaaS or high-scale backend systems

Mandatory technical skills:
- Python, with experience in the Flask or FastAPI frameworks
- Core concepts: data structures, algorithms, and Object-Oriented Programming (OOP)
- System design, including distributed systems and RESTful APIs
- Relational databases (MySQL or SQL Server) and NoSQL databases (MongoDB)
- Microsoft Azure cloud platform (compute, storage, networking, monitoring)
- Hands-on experience with Docker and Kubernetes for containerization and orchestration
- Strong debugging and performance tuning skills
- Understanding of microservices-based architecture

Join us in this exciting opportunity to contribute to building cutting-edge backend systems and leveraging innovative technologies in a collaborative and dynamic environment.

Regards, Daina - Infosys BPM Recruitment Team
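To illustrate the Python/NoSQL stack listed above, here is a minimal, hedged Flask + PyMongo sketch. The connection string, database, collection, and route are assumptions for local experimentation, not details from the posting.

```python
from flask import Flask, jsonify
from pymongo import MongoClient  # pip install flask pymongo

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017")  # assumed local MongoDB
orders = client["demo"]["orders"]                  # assumed db/collection names


@app.get("/orders/<order_id>")
def get_order(order_id: str):
    # Fetch one document, hiding Mongo's internal _id field.
    doc = orders.find_one({"order_id": order_id}, {"_id": 0})
    if doc is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(doc)


if __name__ == "__main__":
    app.run(debug=True)
```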
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
Pune, Maharashtra
On-site
As a software engineer at Google, you will have the opportunity to work on cutting-edge technologies that impact how billions of users connect, explore, and engage with information globally. The products you work on will be required to handle data at a massive scale, going beyond traditional web search. We are seeking individuals who can bring innovative ideas from diverse backgrounds such as information retrieval, distributed computing, system design, networking, security, artificial intelligence, and more. You will be involved in projects critical to Google's needs, with the flexibility to switch teams and projects as both you and the fast-paced business evolve. Versatility, leadership, and a passion for tackling new challenges across the full technology stack are essential qualities we look for in our engineers as we strive to advance technology continually.

As a pivotal member of a dynamic team, your responsibilities will include designing, testing, deploying, and maintaining software solutions. Google values engineers with a wide range of technical skills who are eager to address some of the most significant technological challenges and make a meaningful impact on users worldwide. Our engineers work not only on search enhancements but also on scalability solutions, large-scale applications, and innovative platforms for developers across diverse Google products.

Your role will involve researching, conceptualizing, and developing software applications to enhance and expand Google's product portfolio. You will contribute to various projects that leverage technologies like natural language processing, artificial intelligence, data compression, machine learning, and search technologies. Collaboration on scalability issues related to data access and information retrieval will be a key part of your responsibilities, along with solving the complex challenges presented to you.

If you have a Bachelor's degree or equivalent practical experience and a background in Unix/Linux environments, distributed systems, machine learning, information retrieval, and TCP/IP, along with programming skills in C, C++, Java, or Python, we encourage you to apply. A Bachelor's or advanced degree in Computer Science, Computer Engineering, or a related field is preferred. By joining Google, you will be part of an engineering-driven organization that fosters innovation and offers opportunities to work on groundbreaking technologies that shape the future of global connectivity and information access.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should have expert-level proficiency in Python and Python frameworks, or Java, along with hands-on experience with AWS development: PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Deep experience with key AWS services is required, including:
- Compute: PySpark, Lambda, ECS
- Storage: S3
- Databases: DynamoDB, Snowflake
- Networking: VPC, Route 53, CloudFront, API Gateway
- DevOps/CI-CD: CloudFormation, CDK
- Security: IAM, KMS, Secrets Manager
- Monitoring: CloudWatch, X-Ray, CloudTrail
- NoSQL databases: Cassandra, PostgreSQL

You should have very strong hands-on knowledge of using Python for integrations between systems through different data formats (a minimal Lambda + boto3 sketch follows this posting). Expertise in deploying and maintaining applications in AWS, along with hands-on experience with Kinesis streams and auto-scaling, is essential. Designing and implementing distributed systems and microservices, and following best practices for scalability, high availability, and fault tolerance, are key responsibilities.

Strong problem-solving and debugging skills are necessary for this role, as is the ability to lead technical discussions and mentor junior engineers. Excellent written and verbal communication skills are a must, along with comfort working in agile teams with modern development practices and collaborating with business and other teams to understand business requirements and work on project deliverables. Participation in requirements gathering, understanding, and designing a solution based on the available framework and code is expected, as is experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker). An AWS certification such as AWS Certified Solutions Architect or Developer is preferred.

This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai.

Qualifications:
- Bachelor's degree or foreign equivalent required from an accredited institution. Consideration will be given to three years of progressive experience in the specialty in lieu of every year of education.
- At least 8+ years of Information Technology experience.
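Below is a minimal sketch of an AWS Lambda handler using boto3, illustrating the Python-on-AWS pattern described above. The bucket name is a placeholder; a real deployment would supply it via environment variables or infrastructure-as-code, and package the function through CloudFormation/CDK.

```python
import json

import boto3

s3 = boto3.client("s3")


def lambda_handler(event, context):
    """Persist the incoming event payload to S3 and return a status."""
    key = f"events/{context.aws_request_id}.json"
    s3.put_object(
        Bucket="example-event-archive",  # placeholder bucket name
        Key=key,
        Body=json.dumps(event).encode("utf-8"),
    )
    return {"statusCode": 200, "body": json.dumps({"stored_as": key})}
```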
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a Lead Scala Developer with 4-8 years of development experience, specializing in the Akka or LAGOM frameworks. Your expertise lies in building scalable microservices using Scala, Akka, and/or LAGOM, as well as containerized applications with Docker and Kubernetes. You have practical experience in managing real-time messaging with Apache Pulsar and integrating databases using the Slick connector and PostgreSQL. Additionally, you are proficient in enabling search and analytics features with ElasticSearch and working with GitLab CI/CD pipelines for deployment workflows.

Your role involves developing and maintaining scalable backend systems for data-intensive applications, with an emphasis on high performance, innovation, and clean code. You will work on real-time, distributed systems, requiring a deep understanding of microservices architecture and tools such as Apache Pulsar, ElasticSearch, and Kubernetes. Collaboration across teams is crucial as you write clean, well-structured, and maintainable code.

Your background includes a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. You possess strong software engineering and documentation skills, ensuring the quality and efficiency of your work. Experience with Kafka or RabbitMQ, monitoring/logging tools, frontend frameworks like React or Angular, and cloud platforms like AWS, GCP, or Azure is considered advantageous but is not a mandatory requirement.

Joining this role offers you the opportunity to work on high-performance systems with modern architecture in a collaborative and growth-oriented environment. You will have access to cutting-edge tools, infrastructure, and learning resources, with prospects for long-term growth, upskilling, and mentorship. The role also promotes a healthy work-life balance with onsite amenities and team events.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an integral part of Omnissa, you will have the opportunity to contribute to the development and enhancement of Workspace ONE, an innovative digital workspace platform that ensures secure access to applications on various devices. Your role will involve designing and developing scalable software solutions for the Unified Endpoint Management (UEM) platform. You will play a crucial part in writing code, implementing new use cases, and enhancing the current system to cater to diverse platform businesses.

In your journey at Omnissa, your success will be measured by your ability to produce high-quality software designs, execute them effectively, and continuously improve the product. You will be expected to collaborate with cross-functional teams, contribute to codebases, and identify opportunities for enhancing the scalability, usability, and supportability of the product. Additionally, you will work on a distributed application with an event-driven architecture, using technologies such as C#, the .NET Framework, SQL/PostgreSQL/OpenSearch, Kafka/Redis/RabbitMQ for communications, and ASP.NET MVC along with Angular for front-end development.

To excel in this role, you are required to possess a Bachelor's or Master's degree in Computer Science or a related field, along with proficiency in C# and the .NET Framework. Your understanding of distributed systems, object-oriented design, and multi-threaded programming will be crucial for the role. Furthermore, your ability to troubleshoot, analyze logs, and ensure code quality through various testing methodologies will play a significant role in your success. You should exhibit a strong sense of ownership, prioritize security and compliance considerations, and have experience in large-scale enterprise technology deployments and cloud computing.

Omnissa values diversity and inclusivity in its workforce, aiming to create an environment that fosters innovation and success. We are an Equal Opportunity Employer, committed to providing an equal platform for all individuals based on merit. If you are passionate about driving technological advancements, shaping the future of work, and contributing to a global team, we encourage you to join us on our journey towards creating impactful and secure digital workspaces.

Location: Bengaluru
Location Type: Hybrid/On-site
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing architectures for meta-learning, self-reflective agents, and recursive optimization loops. Your role will involve building simulation frameworks for behavior grounded in Bayesian dynamics, attractor theory, and teleo-dynamics (a toy Bayesian-update sketch follows this posting). You will develop systems that integrate graph rewriting, knowledge representation, and neurosymbolic reasoning, and conduct research on fractal intelligence structures, swarm-based agent coordination, and autopoietic systems. You are expected to advance Mobius's knowledge graph with ontologies supporting logic, agency, and emergent semantics, and to integrate logic into distributed, policy-scoped decision graphs aligned with business and ethical constraints. Furthermore, publishing cutting-edge results and mentoring contributors in reflective system design and emergent AI theory will be part of your duties. Lastly, building scalable simulations of multi-agent, goal-directed, and adaptive ecosystems within the Mobius runtime is an essential aspect of the role.

Qualifications:
- Proven expertise in meta-learning, recursive architectures, and AI safety.
- Proficiency in distributed systems, multi-agent environments, and decentralized coordination.
- Strong implementation skills in Python; additional proficiency in C++, functional, or symbolic languages is a plus.
- A publication record in areas intersecting AI research, complexity science, and/or emergent systems.

Preferred qualifications include experience with neurosymbolic architectures and hybrid AI systems; fractal modeling, attractor theory, and complex adaptive dynamics; topos theory, category theory, and logic-based semantics; knowledge ontologies, OWL/RDF, and semantic reasoners; autopoiesis, teleo-dynamics, and biologically inspired system design; and swarm intelligence, self-organizing behavior, emergent coordination, and distributed learning systems.

Technical proficiency:
- Programming languages: Python (required); C++, Haskell, Lisp, or Prolog (preferred for symbolic reasoning).
- Frameworks: PyTorch and TensorFlow.
- Distributed systems: Ray, Apache Spark, Dask, Kubernetes.
- Knowledge technologies: Neo4j, RDF, OWL, SPARQL.
- Experiment management: MLflow, Weights & Biases.
- GPU and HPC systems: CUDA, NCCL, Slurm.
- Formal modeling tools (familiarity is beneficial): Z3, TLA+, Coq, Isabelle.

Core research domains:
- Recursive self-improvement and introspective AI.
- Graph theory, graph rewriting, and knowledge graphs.
- Neurosymbolic systems and ontological reasoning.
- Fractal intelligence and dynamic attractor-based learning.
- Bayesian reasoning under uncertainty and cognitive dynamics.
- Swarm intelligence and decentralized consensus modeling.
- Topos theory and the abstract structure of logic spaces.
- Autopoietic, self-sustaining system architectures.
- Teleo-dynamics and goal-driven adaptation in complex systems.
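As a purely didactic sketch of the "Bayesian reasoning under uncertainty" theme listed above (not part of the Mobius platform the posting describes), here is a beta-binomial posterior update in Python for an agent's success rate.

```python
def beta_binomial_update(alpha: float, beta: float, successes: int, failures: int):
    """Return the posterior Beta(alpha', beta') after observing new evidence."""
    return alpha + successes, beta + failures


if __name__ == "__main__":
    # Start from a uniform prior Beta(1, 1) over an agent's success rate.
    alpha, beta = 1.0, 1.0
    # Observe 7 successful and 3 failed actions.
    alpha, beta = beta_binomial_update(alpha, beta, successes=7, failures=3)
    posterior_mean = alpha / (alpha + beta)
    print(f"Posterior Beta({alpha}, {beta}), mean success rate = {posterior_mean:.2f}")
```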
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Scala Developer - Akka Specialist at our Pune office, you will lead the implementation of complex backend systems using the Akka toolkit and Scala. In this senior-level engineering position, you will develop actor-based applications and APIs, define best practices for reactive and distributed systems, and mentor junior developers. Additionally, you will contribute to architectural decisions, conduct design and code reviews, monitor performance, perform root cause analysis, and drive improvements.

Key Responsibilities:
- Lead the development of actor-based applications and APIs
- Define best practices for reactive and distributed systems
- Mentor a team of Scala developers
- Conduct design and code reviews
- Monitor performance, conduct root cause analysis, and drive improvements

To excel in this role, you must have strong expertise in Scala and the Akka toolkit, along with experience in leading distributed backend teams. Deep knowledge of Akka Actors, Streams, and Clustering, as well as familiarity with design patterns and architecture principles, is essential. Knowledge of CQRS and Event Sourcing, and experience with Akka Persistence and Cluster Sharding, are considered good to have.

Perks of this role include leadership responsibility in a fast-moving tech environment, access to enterprise tools and architecture planning, and opportunities to shape tech decisions. If you are passionate about Scala, distributed systems, the Akka toolkit, actor-based applications, design patterns, and architecture principles, we encourage you to apply for this exciting opportunity.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Software Engineer, you will design, develop, and maintain scalable backend services and workflow orchestration components using Python and GoLang. You will collaborate with the Airflow and Temporal team to build and optimize data pipelines and asynchronous job execution frameworks. Your role will involve implementing and managing complex workflow logic using Apache Airflow and Temporal, and ensuring high code quality through unit testing, integration testing, and code reviews. Additionally, you will work closely with cross-functional teams, including Data Engineering, DevOps, and Platform Engineering. You will participate in architectural discussions and decision-making processes to ensure scalable and maintainable systems, write clear documentation, and actively take part in knowledge-sharing sessions.
To excel in this role, you should have at least 5-7 years of professional software engineering experience. Strong hands-on programming skills in Python and GoLang are required, along with a solid understanding of concurrent and distributed systems. Previous experience with Apache Airflow and/or Temporal.io is highly beneficial. You should also have expertise in designing and developing robust APIs and backend services, and be familiar with containerization tools such as Docker and with CI/CD practices. A good understanding of the software development lifecycle (SDLC) and Agile methodologies is necessary, and excellent problem-solving, communication, and collaboration skills are key to success in this position.
It would be advantageous to have experience with cloud platforms like AWS, GCP, or Azure. Exposure to microservices architecture and event-driven systems, as well as familiarity with monitoring and observability tools, would be considered a plus.
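As a minimal sketch of the Airflow side of the role (the DAG and task names are hypothetical), a TaskFlow-style pipeline chaining an extract step into a load step might look like this, assuming Airflow 2.4 or later for the schedule argument:

from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract():
        # Stand-in for pulling rows from an upstream source.
        return [1, 2, 3]

    @task
    def load(rows):
        print(f"loaded {len(rows)} rows")

    load(extract())

example_etl()

Temporal covers similar orchestration ground but expresses workflows as durable code with retries and persisted state rather than as scheduled DAGs.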
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
The Digital Software Engineer Senior Manager accomplishes results through the management of professional teams and departments. Integrating subject matter and industry expertise within a defined area, you contribute to standards around which others will operate. You must have an in-depth understanding of how areas collectively integrate within the sub-function and coordinate and contribute to the objectives of the entire function, along with basic commercial awareness. Developed communication and diplomacy skills are essential to guide, influence, and convince others, particularly colleagues in other areas and occasional external customers. You are responsible for the volume, quality, timeliness, and delivery of end results of an area, and you may have to plan, budget, and formulate policies within your area of expertise. You will be involved in short-term resource planning. As a Senior Manager, you have full management responsibility for your team, which may include managing people, budget, and planning. This includes duties such as performance evaluation, compensation, hiring, disciplinary actions, terminations, and budget approval.
Responsibilities:
- Continuously build a network of talent inside and outside of the company.
- Create mechanisms to help onboard new talent to the organization and mentor others effectively.
- Coach and provide feedback to direct reports to help develop talent and support career development.
- Apply performance standards and identify resource needs for the team, setting and balancing goals across the team for optimal performance against department goals and employee development.
- Design, implement, and deploy software components to solve difficult problems, generating positive feedback.
- Have a solid understanding of development approaches and know how best to use them.
- Work independently and with your team to deliver software successfully.
- Consistently deliver high-quality work, incorporating best practices that your team trusts.
- Rapidly provide useful code reviews for changes submitted by others.
- Focus on operational excellence by identifying problems and proposing solutions, and take on projects that make your team's software better and easier to maintain.
- Make improvements to your team's development and testing processes.
- Establish good working relationships with teammates and peers working on related software.
- Recognize discordant views and take part in constructive dialogue to resolve them.
- Confidently train new teammates on your customers, the team's software, how it is constructed, tested, and operated, and how it fits into the bigger picture.
- Assess risk appropriately when making business decisions, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations.
Qualifications:
- 6-10 years of relevant experience in an Apps Development role, or senior-level experience in an Enterprise Architecture role with subject matter expertise in one or more areas.
- Exhibit expertise in all aspects of technology by understanding broader patterns and techniques as they apply to Citi's internal and external cloud platforms (AWS, PCF, Akamai).
- Lead resources and serve as a functional Subject Matter Expert (SME) across the company through advanced knowledge of algorithms, data structures, distributed systems, and networking, and drive broader adoption forward.
- Acquire relevant technology and financial industry skills (AWS PWS) and understand all aspects of NGA technology, including innovative approaches and new opportunities.
- Demonstrate knowledge of automating code quality, code performance, unit testing, and build processing in the CI/CD pipeline.
Education:
- Bachelor's/University degree; Master's degree preferred
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
You will be joining OSP India - Hyderabad Private Limited, now a part of Otto Group one.O, a high-performance partner focused on strategy consulting and technology for the Otto Group. This transition aligns the company with global teams in Germany, Spain, and Taiwan, enhancing collaboration going forward. Your role and job security remain unaffected by the rebranding, ensuring continuity with the company culture.
As a Fullstack Developer with 7-10 years of professional experience, you will be responsible for building scalable, high-quality software solutions. You must demonstrate proficiency in both frontend and backend technologies, with a track record of using AI-assisted coding tools and low-code platforms to speed up development without compromising quality. Your role will involve developing prototypes, proof-of-concepts, and production-grade applications, leveraging fullstack technologies and AI coding tools. Collaboration with cross-functional teams, integration of APIs and third-party services, and maintenance of code quality will be key aspects of your responsibilities.
To excel in this role, you must have strong hands-on experience with frontend frameworks like React, Angular, or Vue.js, and backend development skills in Node.js, Python, Java, or C#. Experience with AI coding tools such as GitHub Copilot, ChatGPT, or AskCodi, and exposure to low-code/no-code platforms like Microsoft Power Apps, Bubble, Retool, or similar tools are essential. Your ability to assess new technologies critically, your problem-solving skills, and effective communication with both technical and non-technical stakeholders will be crucial. Mentoring junior developers and fostering a culture of continuous learning are valued traits for this role.
While experience with DevOps, cloud platforms (AWS, Azure, GCP), design systems, and Agile/Scrum environments is advantageous, it is not a mandatory requirement. The company offers benefits such as flexible working hours, comprehensive medical insurance, and a hybrid work model combining in-office collaboration and remote work to support work-life balance and employee well-being.
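Python is one of the backend options listed; as a minimal sketch of a typed API endpoint of the kind this role involves (the route and model are hypothetical), using FastAPI:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item) -> dict:
    # Stand-in for real persistence; a production service would write to a database here.
    return {"status": "created", "name": item.name, "price": item.price}

# Serve with: uvicorn main:app --reload  (module name is illustrative)

A frontend in React, Angular, or Vue.js would consume such an endpoint over JSON, and AI coding assistants like Copilot are typically used to scaffold exactly this kind of boilerplate.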
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Job Description
As a Big Data Engineer with Capco, a Wipro company, you will play a crucial role in leveraging your skills to drive innovative solutions for our clients in the banking, financial, and energy sectors. Your expertise in messaging technologies such as Apache Kafka, programming languages like Scala and Python, and tools like NiFi and Airflow will be essential in designing and implementing intuitive and responsive user interfaces that enhance data analysis capabilities. You will be responsible for writing efficient queries using Jupyter Notebook, optimizing Spark performance, and ensuring the reliability and scalability of distributed systems. Your strong understanding of cloud architecture, SQL, and software engineering concepts will enable you to deliver high-quality code that meets performance standards.
At Capco, we value diversity, inclusivity, and creativity, and believe that different perspectives contribute to our competitive advantage. With no forced hierarchy, you will have the opportunity to advance your career and make a significant impact on our clients' businesses. Join us at Capco and be part of a dynamic team that is driving transformation in the energy and financial services industries.
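As a minimal sketch of the Kafka-plus-Spark style of pipeline this role describes (the broker address and topic name are hypothetical, and the job needs the spark-sql-kafka connector on its classpath), a PySpark structured-streaming count of messages per key might look like:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Read a Kafka topic as an unbounded streaming DataFrame.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "trades")                      # hypothetical topic
          .load())

# Kafka keys and values arrive as binary; cast keys to strings, then count messages per key.
counts = (events
          .selectExpr("CAST(key AS STRING) AS key")
          .groupBy("key")
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()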
Posted 2 weeks ago