
2613 Elasticsearch Jobs - Page 32

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Engineer, you will be responsible for complex data pipelines handling petabytes of data. The Balbix platform serves as a critical security tool for CIOs, CISOs, and sec-ops teams at small, medium, and large enterprises globally, including Fortune 10 companies. You will solve challenges related to massive cybersecurity and IT data sets, collaborating closely with data scientists, threat researchers, and network experts to address real-world cybersecurity issues. To excel in this role, you must possess excellent algorithm, programming, and testing skills gained from experience on large-scale data engineering projects.

Your primary responsibilities will include designing and implementing features, along with taking ownership of modules for ingesting, storing, and manipulating large data sets to serve various cybersecurity use cases. You will also write code providing backend support for data-driven UI widgets, web dashboards, workflows, search functionality, and API connectors. Additionally, designing and implementing web services, REST APIs, and microservices will be part of your routine tasks. Your aim should be to build high-quality solutions that strike a balance between complexity and the functional requirements' acceptance criteria. Collaboration with multiple teams, including ML, UI, backend, and data engineering, will also be essential for success in this role.

To thrive in this position, you should be driven to seek new experiences, learn about design and architecture, and be open to taking on progressively larger roles within the organization. Your ability to collaborate effectively across teams such as data engineering, front end, product management, and DevOps will be crucial, as will a willingness to take ownership of challenging problems. Strong communication skills, encompassing good documentation practices and the ability to articulate your thought process in a team setting, are essential. You should also be comfortable working in an agile environment and curious about technology and the industry, with a willingness to continuously learn and grow.

Qualifications for this role include an MS/BS degree in Computer Science or a related field and a minimum of three years of experience. You should possess expert programming skills in Python, Java, or Scala, along with good working knowledge of SQL databases like Postgres and NoSQL databases such as MongoDB, Cassandra, and Redis. Experience with search-engine databases like Elasticsearch is preferred, as is familiarity with time-series databases like InfluxDB, Druid, and Prometheus. Strong computer science fundamentals, including data structures, algorithms, and distributed systems, will be advantageous.
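The search functionality this posting describes is typically built by composing Elasticsearch query bodies. A minimal sketch, shown in Python as a plain dict with no client dependency; the index fields ("description", "severity", "seen_at") are hypothetical, not from the posting:

```python
def build_alert_query(text, severity=None, since=None):
    """Compose an Elasticsearch bool-query body as a plain dict.

    Field names here are invented for illustration; a real index
    would define its own mapping.
    """
    must = [{"match": {"description": text}}]   # full-text relevance clause
    filters = []                                # exact, non-scoring clauses
    if severity:
        filters.append({"term": {"severity": severity}})
    if since:
        filters.append({"range": {"seen_at": {"gte": since}}})
    return {"query": {"bool": {"must": must, "filter": filters}}}

q = build_alert_query("open port", severity="high", since="now-7d")
```

The resulting dict can be passed as the request body of a search call; keeping exact-match conditions in `filter` lets Elasticsearch cache them and skip scoring.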

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Intermediate Programmer Analyst position is an intermediate-level role in which you will be responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities. You will be required to utilize your knowledge of applications development procedures and concepts, as well as basic knowledge of other technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing and interpreting code, consulting with users, clients, and other technology groups on issues, and recommending programming solutions. Additionally, you will be responsible for installing and supporting customer exposure systems and applying fundamental knowledge of programming languages to design specifications.

As an Intermediate Programmer Analyst, you will analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. Your role will involve identifying problems, analyzing information, and making evaluative judgments to recommend and implement solutions. You should be able to resolve issues independently by selecting solutions based on acquired technical experience and precedent. Furthermore, you are expected to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members. It is essential to appropriately assess risk when making business decisions, ensuring compliance with applicable laws, rules, and regulations while safeguarding Citigroup, its clients, and its assets.

Qualifications:
- 2-5 years of relevant experience in the financial services industry
- Intermediate-level experience in an applications development role
- Clear and concise written and verbal communication skills
- Strong problem-solving and decision-making abilities
- Ability to work under pressure, manage deadlines, and adapt to unexpected changes in expectations or requirements
- Proficiency in Angular 11+, TypeScript, JavaScript, HTML, CSS, Java/J2EE, and microservices; knowledge of Kafka, Elasticsearch, and SQL is a plus
- Experience as a hands-on UI engineer

Education:
- Bachelor's degree/University degree or equivalent experience

Please note that this job description offers a high-level overview of the work performed. Other job-related duties may be assigned as required. Citi is an equal opportunity and affirmative action employer, encouraging all qualified interested applicants to apply for career opportunities. If you require a reasonable accommodation due to a disability, you may review Accessibility at Citi for assistance.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

We are seeking a Senior Developer to become a valuable member of our team and enhance the overall user experience. As a Senior Developer, you will leverage your expertise in front-end development to contribute to a top-notch codebase that adheres to industry best practices. This role involves tackling intricate technical challenges within the financial sector, collaborating closely with a skilled engineering team, and playing a vital role in project development. The ideal candidate will demonstrate strong problem-solving abilities and a commitment to excellence.

Responsibilities:
- Lead the development team in designing, coding, testing, and debugging applications.
- Actively participate in the development and design of new product features.
- Conduct software component testing to ensure optimal responsiveness and efficiency.
- Write application code and conduct unit testing in Angular and REST web services.

Requirements:
- Proficiency in HTML, CSS, SCSS, and JavaScript, with the ability to craft cross-browser-compatible code.
- Experience developing high-volume, data-driven consumer web applications.
- At least 3 years of experience with Angular 2+ or React, as well as NodeJS and REST APIs.
- Familiarity with SQL, NoSQL, Redis, and Elasticsearch.
- Experience with TypeScript is highly desirable.
- Exposure to AWS is advantageous, along with a willingness to learn new technologies.
- Previous software development experience, particularly in a product development setting.
- A client-first and team-first mindset.
- A Bachelor's or Master's degree in Computer Science, Financial Engineering, or Information Technology from a reputable institution.
- A keen interest in and knowledge of financial markets.
- Bonus points for experience developing applications such as chat, note-taking, or document search.

Benefits of joining our team:
- A dynamic environment that fosters growth and advancement opportunities.
- A collaborative culture of intelligent, passionate individuals.
- A state-of-the-art WeWork office space with amenities like on-tap cappuccino, ping-pong tables, and an invigorating atmosphere.
- Additional perks and employee benefits.

Join us in delivering innovative solutions and shaping the future of technology in the financial industry.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Pune (Work From Office)
Experience Required: 4–8 Years
Notice Period: Immediate to 15 Days Only
Employment Type: Contract

About The Role
We're hiring a Principal Scala Developer with strong expertise in Akka or Lagom and practical experience building real-time, distributed systems. This role demands a deep understanding of microservices architecture, containerized environments, and tools like Apache Pulsar, Elasticsearch, and Kubernetes. You'll work on building scalable backend systems that power data-intensive applications, collaborating with a team that values high performance, innovation, and clean code.

Key Responsibilities
- Develop and maintain scalable microservices using Scala, Akka, and/or Lagom.
- Build containerized applications using Docker and orchestrate them with Kubernetes (K8s).
- Manage real-time messaging with Apache Pulsar.
- Integrate with databases using the Slick connector and PostgreSQL.
- Enable search and analytics features using Elasticsearch.
- Work with GitLab CI/CD pipelines to streamline deployment workflows.
- Collaborate across teams and write clean, well-structured, and maintainable code.

Must-Have Skills
- 4–8 years of development experience with Scala.
- Expertise in the Akka or Lagom frameworks.
- Strong knowledge of microservice architecture and distributed systems.
- Proficiency with Docker and Kubernetes.
- Hands-on experience with Apache Pulsar, PostgreSQL, and Elasticsearch.
- Familiarity with GitLab, CI/CD pipelines, and deployment processes.
- Strong software engineering and documentation skills.

Good to Have
- Experience with Kafka or RabbitMQ.
- Exposure to monitoring and logging tools (Prometheus, Grafana, ELK stack).
- Basic understanding of frontend frameworks like React or Angular.
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Prior experience in domains such as finance, logistics, or real-time data processing.

Educational Background
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Why Join Us
- Work on real-world, high-performance systems with modern architecture.
- Be part of a collaborative, growth-oriented environment.
- Access to cutting-edge tools, infrastructure, and learning resources.
- Opportunities for long-term growth, upskilling, and mentorship.
- Enjoy a healthy work-life balance with onsite amenities and team events.

Skills: akka, postgresql, scala, elasticsearch, microservices architecture, lagom, apache pulsar, distributed systems, docker, kubernetes, ci/cd, gitlab
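The real-time messaging work this role describes (Apache Pulsar or similar brokers) usually means at-least-once delivery, so consumers must tolerate redelivered messages. A language-agnostic sketch, shown here in Python with invented names, of an idempotent consumer that deduplicates by message id:

```python
class IdempotentConsumer:
    """Process each message id at most once under at-least-once delivery.

    A toy illustration: a production consumer would persist the seen set
    (or rely on broker-side deduplication) rather than keep it in memory.
    """

    def __init__(self, handler):
        self.handler = handler
        self.seen = set()  # in production: durable storage, not a set

    def consume(self, msg_id, payload):
        if msg_id in self.seen:
            return False   # duplicate delivery, safely ignored
        self.handler(payload)
        self.seen.add(msg_id)
        return True

processed = []
consumer = IdempotentConsumer(processed.append)
consumer.consume(1, "order-created")
consumer.consume(1, "order-created")  # redelivery, skipped
consumer.consume(2, "order-shipped")
```

After the three calls above, `processed` holds each payload exactly once, even though message 1 was delivered twice.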

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Pune (Work From Office)
Experience Required: 4–8 Years
Notice Period: Immediate to 15 Days Only
Employment Type: Contract

About The Role
We're hiring a Scala Developer - Event Streaming & Microservices with strong expertise in Akka or Lagom and practical experience building real-time, distributed systems. This role demands a deep understanding of microservices architecture, containerized environments, and tools like Apache Pulsar, Elasticsearch, and Kubernetes. You'll work on building scalable backend systems that power data-intensive applications, collaborating with a team that values high performance, innovation, and clean code.

Key Responsibilities
- Develop and maintain scalable microservices using Scala, Akka, and/or Lagom.
- Build containerized applications using Docker and orchestrate them with Kubernetes (K8s).
- Manage real-time messaging with Apache Pulsar.
- Integrate with databases using the Slick connector and PostgreSQL.
- Enable search and analytics features using Elasticsearch.
- Work with GitLab CI/CD pipelines to streamline deployment workflows.
- Collaborate across teams and write clean, well-structured, and maintainable code.

Must-Have Skills
- 4–8 years of development experience with Scala.
- Expertise in the Akka or Lagom frameworks.
- Strong knowledge of microservice architecture and distributed systems.
- Proficiency with Docker and Kubernetes.
- Hands-on experience with Apache Pulsar, PostgreSQL, and Elasticsearch.
- Familiarity with GitLab, CI/CD pipelines, and deployment processes.
- Strong software engineering and documentation skills.

Good to Have
- Experience with Kafka or RabbitMQ.
- Exposure to monitoring and logging tools (Prometheus, Grafana, ELK stack).
- Basic understanding of frontend frameworks like React or Angular.
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Prior experience in domains such as finance, logistics, or real-time data processing.

Educational Background
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Why Join Us
- Work on real-world, high-performance systems with modern architecture.
- Be part of a collaborative, growth-oriented environment.
- Access to cutting-edge tools, infrastructure, and learning resources.
- Opportunities for long-term growth, upskilling, and mentorship.
- Enjoy a healthy work-life balance with onsite amenities and team events.

Skills: akka, gitlab, distributed systems, lagom, ci/cd, microservices, docker, elasticsearch, microservices architecture, apache pulsar, scala, kubernetes, postgresql

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Pune (Work From Office)
Experience Required: 4–8 Years
Notice Period: Immediate to 15 Days Only
Employment Type: Contract

About The Role
We're hiring a Scala & Reactive Systems Engineer with strong expertise in Akka or Lagom and practical experience building real-time, distributed systems. This role demands a deep understanding of microservices architecture, containerized environments, and tools like Apache Pulsar, Elasticsearch, and Kubernetes. You'll work on building scalable backend systems that power data-intensive applications, collaborating with a team that values high performance, innovation, and clean code.

Key Responsibilities
- Develop and maintain scalable microservices using Scala, Akka, and/or Lagom.
- Build containerized applications using Docker and orchestrate them with Kubernetes (K8s).
- Manage real-time messaging with Apache Pulsar.
- Integrate with databases using the Slick connector and PostgreSQL.
- Enable search and analytics features using Elasticsearch.
- Work with GitLab CI/CD pipelines to streamline deployment workflows.
- Collaborate across teams and write clean, well-structured, and maintainable code.

Must-Have Skills
- 4–8 years of development experience with Scala.
- Expertise in the Akka or Lagom frameworks.
- Strong knowledge of microservice architecture and distributed systems.
- Proficiency with Docker and Kubernetes.
- Hands-on experience with Apache Pulsar, PostgreSQL, and Elasticsearch.
- Familiarity with GitLab, CI/CD pipelines, and deployment processes.
- Strong software engineering and documentation skills.

Good to Have
- Experience with Kafka or RabbitMQ.
- Exposure to monitoring and logging tools (Prometheus, Grafana, ELK stack).
- Basic understanding of frontend frameworks like React or Angular.
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Prior experience in domains such as finance, logistics, or real-time data processing.

Educational Background
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Why Join Us
- Work on real-world, high-performance systems with modern architecture.
- Be part of a collaborative, growth-oriented environment.
- Access to cutting-edge tools, infrastructure, and learning resources.
- Opportunities for long-term growth, upskilling, and mentorship.
- Enjoy a healthy work-life balance with onsite amenities and team events.

Skills: akka, gitlab, distributed systems, lagom, docker, microservices architecture, elasticsearch, kubernetes, scala, ci/cd pipelines, apache pulsar, postgresql

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
At Publicis Sapient, we're at the forefront of revolutionizing the future of product engineering with state-of-the-art, scalable innovations. If you're an Associate Software Development Engineer seeking your next transformative challenge, we have an incredible opportunity for you: our team utilizes advanced artificial intelligence and machine learning methodologies to design and implement intelligent, adaptive solutions that tackle complex real-world challenges.

Your Impact
- You will work in the spirit of agile and a product-engineering mindset, delivering the sprint outcomes iteratively and incrementally, following the agile ceremonies.
- You are expected to write clean, modular, production-ready code and take it through the production and post-production lifecycle.
- You will groom the stories functionally and help define the acceptance criteria (functional and non-functional/NFRs).
- You will bring breadth of concepts, tools, and technologies to address NFRs like security, performance, reliability, and maintainability, and understand the need for trade-offs.
- You will bring in expertise to optimize and make the relevant design decisions (considering trade-offs) at the module/component level.
- You will manage the product lifecycle from requirements gathering and feasibility analysis through high-level and low-level design, development, user acceptance testing (UAT), and staging deployment.

Qualifications
Your Skills & Experience:
- You have 2+ years of professional experience building large-scale, high-volume services and distributed applications, taking them through production and post-production lifecycles.
- You use more than one programming language, with expertise in at least one, e.g. memory management, GC, templates/generics, closures, multi-threading, sync/async and blocking/non-blocking execution styles.
- You practice imperative and functional programming styles.
- You are aware of cloud platforms like AWS, GCP, and Azure.
- You are a problem solver, choosing the relevant data structures and algorithms with time and space complexity in mind.
- You apply SOLID and DRY design principles and design patterns, and practice clean code.
- You are an expert at string manipulation, date/time arithmetic, and collections and generics.
- You practice and guide on handling failures: error management and exception handling.
- You build reliable, high-performance apps leveraging eventing, streaming, concurrency, multi-threading, and synchronization libraries and frameworks.
- You develop web apps using HTML, CSS, JavaScript, and relevant frameworks (Angular, React, Vue).
- You design and build microservices from the ground up, considering all NFRs and applying DDD and bounded contexts.
- You use one or more databases (RDBMS or NoSQL) based on your needs.
- You deploy to production, troubleshoot problems, and provide live support.
- You understand the significance of security and compliance with data, code, and application security policies; you write secure code to prevent known vulnerabilities; you understand HTTPS/TLS, symmetric/asymmetric cryptography, and certificates.
- You use one or more web application frameworks: Spring, Spring Boot, or Micronaut (Java); Flask or Django (Python); Express, Meteor, or Koa (Node); ASP.NET MVC, WebAPI, or Nancy (.NET).
- You use one or more messaging platforms (e.g. JMS, RabbitMQ, Kafka, Tibco, Camel).
- You use mocks and stubs and related frameworks (Moq).
- You use logging frameworks like Log4j and NLog.
- You use build tools like MSBuild, Maven, Gradle, and Gulp.
- You understand and use containers and virtualization.
- You use proactive monitoring, alerting, and dashboards.
- You use logging/monitoring solutions (Splunk, ELK, Grafana).

Additional Information
Set yourself apart with:
- You understand infrastructure as code (cattle over pets).
- You understand reactive programming concepts and actor models, and use RxJava, Spring Reactor, Akka, Play, etc.
- You are able to set up a CI/CD pipeline infrastructure and stack from the ground up.
- You are able to articulate the pros and cons of designs and their trade-offs.
- You are aware of distributed tracing, debugging, and troubleshooting.
- You are aware of sidecar and service-mesh usage along with microservices.
- You are aware of distributed and cloud design patterns and architectural styles.
- You are aware of gateways, load balancers, CDNs, and edge caching.
- You are aware of Gherkin and Cucumber for BDD automation.
- You are aware of performance-testing tools like JMeter and Gatling.
- You are aware of a search solution like Elasticsearch, Solr, or Endeca.
- You are aware of a distributed caching solution like Redis or Memcached.
- You are aware of a rules engine like Drools or Easy Rules.

Benefits Of Working Here
- Gender-neutral policy
- 18 paid holidays throughout the year
- Generous parental leave and new-parent transition program
- Flexible work arrangements
- Employee assistance programs to support your wellness and well-being

A Tip From The Hiring Manager
Associate Software Development Engineers (ASDE-2) are bright, talented, and motivated young minds with strong technical skills, developing software applications and services that make life easier for customers. The ASDE-2 is expected to work with an agile team to develop, test, and maintain digital business applications.

Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of the next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.
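The error management and exception handling the qualifications above emphasize often takes the shape of a retry-with-backoff wrapper around flaky calls. A minimal Python sketch under that assumption; the decorator and the `flaky` example are illustrative, not from any specific framework:

```python
import time


def retry(attempts=3, delay=0.0):
    """Retry a failing call a fixed number of times before re-raising.

    A toy sketch: production code would catch specific exception types
    and use exponential backoff with jitter instead of a fixed delay.
    """
    def wrap(fn):
        def inner(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise       # exhausted retries: surface the error
                    time.sleep(delay)
        return inner
    return wrap


calls = {"n": 0}

@retry(attempts=3)
def flaky():
    """Simulated transient failure: succeeds on the third attempt."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"
```

Calling `flaky()` here returns "ok" after two transparent retries; a caller never sees the transient failures unless all attempts are exhausted.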

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description
Arista Networks is an industry leader in data-driven, client-to-cloud networking for large data center, campus, and routing environments. Arista is a well-established and profitable company with over $8 billion in revenue. Arista's award-winning platforms, with Ethernet speeds of up to 800 Gbps, redefine scalability, agility, and resilience. Arista is a founding member of the Ultra Ethernet Consortium. We have shipped over 20 million cloud networking ports worldwide with CloudVision and EOS, an advanced network operating system. Arista is committed to open standards, and its products are available worldwide directly and through partners.

At Arista, we value the diversity of thought and perspectives each employee brings. We believe fostering an inclusive environment where individuals from various backgrounds and experiences feel welcome is essential for driving creativity and innovation. Our commitment to excellence has earned us several prestigious awards, such as the Great Place to Work Survey for Best Engineering Team and Best Company for Diversity, Compensation, and Work-Life Balance. At Arista, we take pride in our track record of success and strive to maintain the highest quality and performance standards in everything we do.

Job Description
Who You'll Work With
CloudVision is Arista's enterprise network management and streaming telemetry SaaS offering, serving the world's largest Financials, Media and Entertainment, Health Care, and Cloud companies. As we continue to scale the service and expand into new markets, we're looking to grow the team with experienced software engineers anchored by our Bangalore and Pune teams. CloudVision's core infrastructure is a scale-out distributed system providing real-time and historical access to the full network state, along with frameworks for building advanced analytics. It's written in Go and leverages open-source technologies like HBase, ClickHouse, Elasticsearch, and Kafka under the covers. We're constantly investing in scaling out the platform and building richer analytics capabilities in the infrastructure. On top of this core platform we are building network management and analytics applications to fully automate today's enterprise network, from CI/CD pipelines for network automation to advanced analytics and remediation for network assurance.

What You'll Do
As a backend software engineer at Arista, you own your project end to end. You and your project team will work with product management and customers to define the requirements and design the architecture. You'll build the backend, write automated tests, and get it deployed into production via our CD pipeline. As a senior member of the team you'll also be expected to help mentor and grow new team members. This role demands a strong and broad software engineering background, and you won't be limited to any single aspect of the product or development process.

Qualifications
- BS/MS degree in Computer Science and 8+ years of relevant experience.
- Strong knowledge of one or more programming languages (Go, Python, Java).
- Experience developing distributed systems or scale-out applications for a SaaS environment.
- Experience developing scalable backend systems in Go is a plus.
- Experience with network monitoring, network protocols, machine learning, or data analytics is a plus.

Additional Information
Arista stands out as an engineering-centric company. Our leadership, including founders and engineering managers, are all engineers who understand sound software engineering principles and the importance of doing things right. We hire globally into our diverse team. At Arista, engineers have complete ownership of their projects. Our management structure is flat and streamlined, and software engineering is led by those who understand it best. We prioritize the development and utilization of test automation tools. Our engineers have access to every part of the company, providing opportunities to work across various domains. Arista is headquartered in Santa Clara, California, with development offices in Australia, Canada, India, Ireland, and the US. We consider all our R&D centers equal in stature. Join us to shape the future of networking and be part of a culture that values invention, quality, respect, and fun.

Posted 3 weeks ago

Apply

10.0 years

40 - 50 Lacs

Pune, Maharashtra, India

On-site

About Company
With a focus on Identity and Access Management (IAM) and Customer Identity and Access Management (CIAM), we offer cutting-edge solutions to secure your workforce, customers, and partners. Our expertise also includes new-age security solutions for popular CMS and project management platforms like Atlassian, WordPress, Joomla, Drupal, Shopify, BigCommerce, and Magento. Our solutions are specific, accurate, and, most importantly, great at doing what they're supposed to: making you more secure!

Position Details
We are looking for a talented and experienced AI/ML Engineer to join our growing team and contribute to the development of cutting-edge AI-powered products and solutions. The ideal candidate will have 10+ years of hands-on experience in developing and deploying advanced AI and ML models and related software systems.

Status: Full Time, Employee
Experience: 10+ Years
Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Computational Linguistics, Natural Language Processing (NLP), or other related fields.
Location: Baner, Pune

Roles & Responsibilities
- Develop machine learning and deep learning models and algorithms to solve complex business problems, improve processes, and enhance product functionality.
- Develop and deploy personalized large language models (LLMs).
- Develop document parsing, named entity recognition (NER), retrieval-augmented generation (RAG), and chatbot systems.
- Build robust data and ML pipelines for production scale and performance.
- Optimize and fine-tune machine learning models for performance, scalability, and accuracy, leveraging techniques such as hyperparameter tuning and model optimization.
- Write robust, production-quality code using frameworks like PyTorch or TensorFlow.
- Stay updated on the latest advancements in AI/ML technologies, tools, and methodologies, incorporating best practices into development processes.
- Collaborate with stakeholders to understand business requirements, define project objectives, and deliver AI/ML solutions that meet customer needs and drive business value.

Requirements
- Bachelor's or Master's degree in Computer Science, Data Science, Computational Linguistics, Natural Language Processing (NLP), or other related fields.
- 10+ years of experience developing and deploying machine learning models and algorithms, with hands-on experience with AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Experience with Python, spaCy, NLTK, and knowledge graphs.
- Experience with search, particularly information retrieval, Elasticsearch, and relevance.
- Experience working with and fine-tuning existing models, especially from Hugging Face.
- Strong programming skills in languages such as Python, Java, or C++ for AI/ML model development and integration.
- Familiarity with web frameworks (FastAPI, Flask, Django, etc.) for building APIs.
- Knowledge of agentic AI frameworks like LangChain, LangGraph, AutoGen, or CrewAI.
- Outstanding knowledge and experience in data science and MLOps across ML/DL and generative AI, with experience in containerization (Docker) and orchestration (Kubernetes) for deployment.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and AI/ML deployment tools for scalable and reliable model deployment.
- Strong communication and collaboration skills, with the ability to work independently and collaboratively in a dynamic environment.

What We Offer You
- A constant stream of new things to learn. We're always expanding into new areas and exploring new technologies.
- A set of extraordinarily talented and dedicated peers.
- A stable, collaborative, and supportive work environment.

Skills: ai/ml technologies, deep learning, ai, kubernetes, natural language processing (nlp), scikit-learn, tensorflow, c++, langchain, elasticsearch, python, autogen, nltk, deep learning models, crew ai, flask, chatbot, ner, ml, ml pipelines, mlops, orchestration, machine learning algorithms, generative ai, data science, containerization, fastapi, ai/ml, django, azure, gcp, machine learning models, apis, spacy, algorithms, llm, docker, agentic ai, large language models, pytorch, artificial intelligence, rag, java, aws, machine learning, ml/dl
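The retrieval-augmented generation (RAG) work this role lists starts with a nearest-neighbor lookup over document embeddings before any generation happens. A toy sketch of that retrieval step; the two-dimensional vectors and documents are made up, and a real system would use learned embeddings and a vector store rather than a linear scan:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def retrieve(query_vec, docs, k=1):
    """Return the texts of the k docs whose embedding is closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]


# Hypothetical mini corpus with hand-made 2-D "embeddings".
docs = [
    {"text": "reset your password", "vec": [1.0, 0.0]},
    {"text": "pricing plans",       "vec": [0.0, 1.0]},
]
top = retrieve([0.9, 0.1], docs, k=1)
```

The retrieved texts would then be stuffed into the LLM prompt as context; swapping the linear scan for an approximate nearest-neighbor index is what makes this scale.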

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc's Best Workplaces . We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle , Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. We are hiring 2 senior software engineers (alternative title: senior backend engineer). One will join our Infrastructure Team, and the other will join our Data Feeds Team. As Our Senior Software Engineer in the Infrastructure Team, You Will be Responsible for: ERP Data Connector Specialist : You'll be the one implementing connectors to fetch ERP data. It's like building a bridge to a treasure trove of information. And not just that, you've got to ensure this data has high availability. So, whether it's the crack of dawn or the dead of night, the ERP data should be accessible without a hitch. Backend API Master : Customers' requirements are constantly evolving, and that's where you come in. You'll be implementing or upgrading backend APIs to fit these new demands like a custom-made glove. Your work will be the driving force behind our ability to keep up with the market. ERP Data Storage Maestro : Take charge of the ERP system's data storage. It's your kingdom, and you're responsible for all related improvements. 
Make sure the data is stored efficiently and securely, and is always ready for action. Business-Tech Liaison: Understand the business requirements inside out. Jump right into discussions with the team and bring your A-game to design technical solutions. You'll be the one who bridges the gap between business needs and technical implementation. Service Maintenance and Upgrade Champion: Maintain our existing services like a pro. Dive into iterative upgrades, deploy improvements, and take charge of service governance. Your work will be the glue that holds our services together and keeps them evolving. Global Team Player: Work hand-in-glove with our US/SG/China teams. Be flexible with work hours as the data world never sleeps, and we need you to be on top of your game, always! As Our Senior Software Engineer in the Data Feeds Team, You Will Be Responsible For: Data Pipelines Maestro: You'll be responsible for developing, optimizing, and maintaining super-scalable data pipelines. Whether it's structured data flowing like a well-oiled stream or unstructured data that needs taming, you've got to make sure these pipelines are top-notch for seamless processing. Data Systems Guardian: You'll be the one maintaining and enhancing the stability and reliability of our existing data systems and services, and ensuring their high availability. Think of yourself as the shield that keeps our data world running smoothly. Data Architecture Builder: Team up with our amazing crew to construct and refine an expandable, high-performance data architecture. It's like building a digital skyscraper, but with data blocks! Data Services Provider: Partner with different teams across the board. Your mission is to serve up high-quality, rock-solid data services to all our internal users. They'll be relying on you like a lifeline for their data needs. Business-Aligned Data Designer: Get the hang of product and business requirements. 
Then, design and implement data functionalities that are not only useful but also come with super-intuitive data visualizations. Make data come alive! Third-Party Data Integrator: Oversee the integration and maintenance of collaborative data with our third-party clients. Solve their data mining and analytical headaches, and you'll be the hero of the data realm. Global Team Player: Work hand-in-glove with our US/SG/China teams. Be flexible with work hours as the data world never sleeps, and we need you to be on top of your game, always! Data Governance Enforcer: Enforce the best practices in data governance, security, and compliance. You're the gatekeeper of sensitive information, and you've got to keep it safe at all costs. This is a fully remote opportunity based in India. Standard work hours are from 8 am to 5 pm IST. You Are Likely To Succeed If You Have: A bachelor's degree in Computer Science or a related major, plus 5+ years of backend experience. A solid computer science foundation and strong programming skills, with familiarity with common data structures and algorithms. Excellence in one of the following languages: Go or Python. Familiarity with at least one of the following open-source components: MySQL, Redis, message queues, or NoSQL databases. Familiarity with Elasticsearch or Spark (for the Data Feeds Team). Experience in architecting and developing large-scale distributed systems (for the Infrastructure Team). Excellent logical analysis capabilities, with the ability to abstract and split business logic reasonably. Exposure to cloud infrastructure, such as Kubernetes/Docker and Azure/AWS/GCP. Familiarity with ERP systems. What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. 
Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer. Job Applicant Privacy Notice

Posted 3 weeks ago

Apply


4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities: As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include: Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications, such as Elasticsearch and Splunk, for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results. Preferred Education: Master's Degree. Required Technical And Professional Expertise: 4+ years of experience in data modelling and data architecture. 
Proficiency in data modelling tools such as Erwin and IBM InfoSphere Data Architect, and in database management systems. Familiarity with different data models, such as relational, dimensional, and NoSQL databases. Understanding of business processes and how data supports business decision-making. Strong understanding of database design principles, data warehousing concepts, and data governance practices. Preferred Technical And Professional Experience: Excellent analytical and problem-solving skills with a keen attention to detail. Ability to work collaboratively in a team environment and manage multiple projects simultaneously. Knowledge of programming languages such as SQL.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Delhi, India

On-site

Position Overview: We're looking for a highly skilled DevOps Engineer with 6+ years of experience in designing, implementing, and managing large-scale systems. The ideal candidate will have expertise in AWS technologies, Kafka, Docker, Kubernetes, CI/CD, the Elastic Stack, and Grafana. The DevOps Engineer will play a key role in ensuring the scalability, reliability, and security of our platform. Responsibilities: Design, implement, and manage large-scale systems on AWS, including EC2, S3, ECR, ECS, AWS CloudWatch, API Gateway, and Lambda. Use Terraform for infrastructure automation (IaC), including writing and managing Terraform modules. Develop and maintain containerized applications using Docker and Kubernetes. Design and implement CI/CD pipelines using Git and CI/CD tools. Ensure the scalability, reliability, and security of our platform. Collaborate with cross-functional teams to identify and prioritize system requirements. Troubleshoot and resolve complex system issues. Stay up-to-date with industry trends and emerging technologies. Design, deploy, and maintain cloud infrastructure using Terraform while ensuring best practices and scalability. Requirements: 6+ years of experience in system engineering or a related field. In-depth knowledge of AWS technologies, including EC2, S3, ECR, ECS, AWS CloudWatch, API Gateway, and Lambda. Experience with containerization using Docker and orchestration using Kubernetes. Strong understanding of CI/CD pipelines and experience with Git and CI/CD tools. Experience with monitoring and logging tools such as the ELK Stack (Elasticsearch, Logstash, Kibana) and Grafana. About Creator Bridge: At Creator Bridge, we are developing a social media platform that brings together users, creators, and brands. Users can share and explore content, shop easily, and connect in real-time. Creators can show their skills, earn money, and work with brands, while brands can run campaigns, partner with creators, and grow their audience. 
We focus on building real connections to inspire creativity and drive business.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

🚨 We're Hiring! | QA Engineer 🚨 📍 Location: Chennai 🕒 Experience: 5+ years of hands-on experience in designing and developing test cases At Neeve, we’re redefining smart building technologies with cloud-connected solutions that make building operations intelligent, efficient, and future-ready. We are seeking a driven individual to join the Smart Building Cloud team within our IoT organization. If you're interested, we’d love to hear from you! Responsibilities: Understand, test, and automate key test cases for our software-defined converged infrastructure product. Collaborate with test leads, developers, deployment teams, and product managers to understand use cases and develop test cases and scenarios. Own automation execution and regression testing to ensure the quality of solutions, including traffic inspection, network issue identification, and traffic auditing. Perform system, performance, and stress testing. Desired Skills and Experience: Strong analytical and problem-solving skills. 5+ years of experience in designing and developing test cases. Hands-on experience/interest in: software testing lifecycle, QA activities, build systems, regression testing, and test automation; scripting using Python and Shell; user flow testing; networking (firewalls, routing/switching/VLANs, proxies/load balancers, TCP/IP, WAN & LAN protocols, DHCP, DNS); Linux systems; analyzing network traces; virtualization and containers (VMware, Hyper-V, Docker, OpenStack); cloud platforms (AWS, Azure, GCP); logging tools (Logstash, Elasticsearch, Kibana); Git, Jenkins, Jira; AI adoption in QA workflows, such as test automation, defect prediction, and log analysis to improve quality and efficiency (preferred). Education: Bachelor’s or Master’s degree in Computer Science or a related field. 📩 Interested candidates can send their resumes to: opportunities@neeve.ai Let’s build the future, together. 🌐

Posted 3 weeks ago

Apply

2.0 - 3.8 years

5 - 7 Lacs

Hyderābād

Remote

Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. Whether you’re interested in engineering or development, marketing or sales, or something else – if this sounds like you, then we’d love to hear from you! We are headquartered in Denver, Colorado, with offices in the US, Canada, and India. DevOps II JD: Vertafore is a leading technology company whose innovative software solutions are advancing the insurance industry. Our suite of products provides solutions to our customers that help them better manage their business, boost their productivity and efficiencies, and lower costs while strengthening relationships. Our mission is to move InsurTech forward by putting people at the heart of the industry. We are leading the way with product innovation, technology partnerships, and a focus on customer success. JOB DESCRIPTION Does building out a top-tier DevOps team, and everything that comes with it, sound intriguing? This role is a DevOps senior software engineer/team lead embedded in an energetic DevOps agile team. Our DevOps teams are tightly coupled and integrated with the culture, tools, practices, and patterns of the rest of our software engineering organization. They not only “keep the lights on” for our systems and networks, but also empower our other development teams with cutting-edge tools and capabilities to bring Vertafore products to market as quickly as possible. All of this will be accomplished with cutting-edge, lean-agile software development methodologies. 
Core Requirements and Responsibilities: Essential job functions included but are not limited to the following: You will lead the team in building out our continuous delivery infrastructure and processes for all our products utilizing state of the art technologies. You will be hands on leading the architecture and design of the frameworks for the automated continuous deployment of application code, the operational and security monitoring and care of the infrastructure and software platforms. You and your team will serve as the liaison between the agile development teams, SaaS operations, and external cloud providers for deployment, operational efficiency, security, and business continuity. Why Vertafore is the place for you: *Canada Only The opportunity to work in a space where modern technology meets a stable and vital industry Medical, vision & dental plans Life, AD&D Short Term and Long Term Disability Pension Plan & Employer Match Maternity, Paternity and Parental Leave Employee and Family Assistance Program (EFAP) Education Assistance Additional programs - Employee Referral and Internal Recognition Why Vertafore is the place for you: *US Only The opportunity to work in a space where modern technology meets a stable and vital industry We have a Flexible First work environment! Our North America team members use our offices for collaboration, community and team-building, with members asked to sometimes come into an office and/or travel depending on job responsibilities. Other times, our teams work from home or a similar environment. 
Medical, vision & dental plans PPO & high-deductible options Health Savings Account & Flexible Spending Accounts Options: Health Care FSA Dental & Vision FSA Dependent Care FSA Commuter FSA Life, AD&D (Basic & Supplemental), and Disability 401(k) Retirement Savings Plan & Employer Match Supplemental Plans - Pet Insurance, Hospital Indemnity, and Accident Insurance Parental Leave & Adoption Assistance Employee Assistance Program (EAP) Education & Legal Assistance Additional programs - Tuition Reimbursement, Employee Referral, Internal Recognition, and Wellness Commuter Benefits (Denver) The selected candidate must be legally authorized to work in the United States. The above statements are intended to describe the general nature and level of work being performed by people assigned to this job. They are not intended to be an exhaustive list of all the job responsibilities, duties, skills, or working conditions. In addition, this document does not create an employment contract, implied or otherwise, other than an "at will" relationship. Vertafore strongly supports equal employment opportunity for all applicants regardless of race, color, religion, sex, gender identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, sexual orientation, genetic information, or any other characteristic protected by state or federal law. The Professional Services (PS) and Customer Success (CX) bonus plans are quarterly monetary bonus plans based upon individual and practice performance against specific business metrics. Eligibility is determined by several factors, including start date, good standing in the company, and active status at the time of payout. The Vertafore Incentive Plan (VIP) is an annual monetary bonus for eligible employees based on both individual and company performance. 
Eligibility is determined by several factors, including start date, good standing in the company, and active status at the time of payout. Commission plans are tailored to each sales role, but common components include quota, MBOs, and ABPMs. Salespeople receive their formal compensation plan within 30 days of hire. Vertafore is a drug-free workplace and conducts pre-employment drug and background screenings. We do not accept resumes from agencies, headhunters, or other suppliers who have not signed a formal agreement with us. We want to make sure our recruiting process is accessible for everyone. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact recruiting@vertafore.com. Just a note: this contact information is for accommodation requests only. Knowledge, Skills, Abilities and Qualifications: Bachelor’s degree in Computer Science (or a related technical field) or equivalent practical experience 2 – 3.8 years of professional experience in DevOps Have excellent communication and interpersonal skills and the ability to work with other developers, business analysts, testing specialists, and product owners to create stellar software products Have a strong sense of ownership. Strong diagnostic, analytical, and design skills. Closely follow industry trends and the open source community, identifying and proactively advocating for cutting-edge tools that would optimize operational performance and/or reduce operating costs. Have experience in regulated environments. Care about quality and know what it means to ship high-quality code and infrastructure. Be a curious and avid learner. Communicate clearly to explain and defend design decisions. Self-motivated and an excellent problem-solver. Driven to improve, personally and professionally. Mentor and inspire others to raise the bar for everyone around them. Love to collaborate with peers, designing pragmatic solutions. 
Operate best in a fast-paced, flexible work environment. Experience with Agile software development. Experience in mission-critical cloud operations and/or DevOps engineering. Have experience with AWS technologies and/or developing with distributed systems using Ansible, Puppet, or Jenkins. Strong understanding of and experience working with Windows, Unix, and Linux operating systems, specifically troubleshooting and providing administration. Have experience with operating and tuning relational and NoSQL databases. Strong experience with Terraform and Jenkins. Have experience performing support and administrative tasks within Amazon Web Services (AWS), Azure, OpenStack, or other cloud infrastructure technologies. Proficiency in managing systems across multiple sites, including fail-over redundancy and autoscaling (knowledge of best practices and IT operations in an always-up, always-available service). Have experience deploying, maintaining, and managing secure systems. A background in software development, preferably Web applications. Proficient in monitoring and logging tools such as the ELK Stack (Elasticsearch, Logstash, and Kibana). Have experience with build & deploy tools (Jenkins). Have knowledge of IP networking, VPNs, DNS, load balancing, and firewalling. Enjoy solving problems through the entire application stack. Have been on the front lines of production outages, both working to resolve the outage and root-causing the problem to provide long-term resolution or early-identification strategies.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderābād

Remote

About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc's Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. We are hiring 2 senior software engineers (alternative title: senior backend engineer). One will join our Infrastructure Team, and the other will join our Data Feeds Team. As Our Senior Software Engineer in the Infrastructure Team, You Will be Responsible for: ERP Data Connector Specialist: You'll be the one implementing connectors to fetch ERP data. It's like building a bridge to a treasure trove of information. And not just that, you've got to ensure this data has high availability. So, whether it's the crack of dawn or the dead of night, the ERP data should be accessible without a hitch. Backend API Master: Customers' requirements are constantly evolving, and that's where you come in. You'll be implementing or upgrading backend APIs to fit these new demands like a custom-made glove. Your work will be the driving force behind our ability to keep up with the market. ERP Data Storage Maestro: Take charge of the ERP system's data storage. It's your kingdom, and you're responsible for all related improvements.
Make sure the data is stored efficiently and securely, and is always ready for action. Business-Tech Liaison: Understand the business requirements inside out. Jump right into discussions with the team and bring your A-game to design technical solutions. You'll be the one who bridges the gap between business needs and technical implementation. Service Maintenance and Upgrade Champion: Maintain our existing services like a pro. Dive into iterative upgrades, deploy improvements, and take charge of service governance. Your work will be the glue that holds our services together and keeps them evolving. Global Team Player: Work hand-in-glove with our US/SG/China teams. Be flexible with work hours as the data world never sleeps, and we need you to be on top of your game, always! As Our Senior Software Engineer in the Data Feeds Team, You Will be Responsible for: Data Pipelines Maestro: You'll be responsible for developing, optimizing, and maintaining super-scalable data pipelines. Whether it's structured data flowing like a well-oiled stream or unstructured data that needs taming, you've got to make sure these pipelines are top-notch for seamless processing. Data Systems Guardian: You'll be the one maintaining and enhancing the stability and reliability of existing data systems and services, ensuring high availability. Think of yourself as the shield that keeps our data world running smoothly. Data Architecture Builder: Team up with our amazing crew to construct and refine an expandable, high-performance data architecture. It's like building a digital skyscraper, but with data blocks! Data Services Provider: Partner with different teams across the board. Your mission is to serve up high-quality, rock-solid data services to all our internal users. They'll be relying on you like a lifeline for their data needs. Business-Aligned Data Designer: Get the hang of product and business requirements.
Then, design and implement data functionalities that are not only useful but also come with super-intuitive data visualizations. Make data come alive! Third-Party Data Integrator: Oversee the integration and maintenance of collaborative data with our third-party clients. Solve their data mining and analytical headaches, and you'll be the hero of the data realm. Global Team Player: Work hand-in-glove with our US/SG/China teams. Be flexible with work hours as the data world never sleeps, and we need you to be on top of your game, always! Data Governance Enforcer: Enforce the best practices in data governance, security, and compliance. You're the gatekeeper of sensitive information, and you've got to keep it safe at all costs. This is a fully-remote opportunity based in India. Standard work hours are from 8 am to 5 pm IST. You Are Likely To Succeed If you have: Bachelor's degree in Computer Science or related majors, with 5+ years of backend experience. A solid computer science foundation and programming skills, with familiarity with common data structures and algorithms. Excellence in one of the following languages: Go/Python. Familiarity with one of the following open-source components: MySQL/Redis/message queues/NoSQL. Familiarity with Elasticsearch OR Spark (for Data Feeds Team). Experience in architecting and developing large-scale distributed systems (for Infrastructure Team). Excellent logical analysis capabilities, able to abstract and split business logic reasonably. Exposure to cloud infrastructure, such as Kubernetes/Docker and Azure/AWS/GCP. Familiarity with ERP systems. What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics.
Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer. Job Applicant Privacy Notice

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description Arista Networks is an industry leader in data-driven, client-to-cloud networking for large data center, campus and routing environments. Arista is a well-established and profitable company with over $8 billion in revenue. Arista's award-winning platforms, with Ethernet speeds of up to 800 gigabits per second, redefine scalability, agility, and resilience. Arista is a founding member of the Ultra Ethernet consortium. We have shipped over 20 million cloud networking ports worldwide with CloudVision and EOS, an advanced network operating system. Arista is committed to open standards, and its products are available worldwide directly and through partners. At Arista, we value the diversity of thought and perspectives each employee brings. We believe fostering an inclusive environment where individuals from various backgrounds and experiences feel welcome is essential for driving creativity and innovation. Our commitment to excellence has earned us several prestigious awards, such as the Great Place to Work Survey for Best Engineering Team and Best Company for Diversity, Compensation, and Work-Life Balance. At Arista, we take pride in our track record of success and strive to maintain the highest quality and performance standards in everything we do. Job Description Who You'll Work With CloudVision is Arista's enterprise network management and streaming telemetry SaaS offering, serving the world's largest Financials, Media and Entertainment, Health Care, and Cloud companies. As we continue to scale the service and expand into new markets, we're looking to grow the team with experienced Software Engineers anchored by our Bangalore and Pune team. CloudVision's core infrastructure is a scale-out distributed system providing real-time and historical access to the full network state, along with frameworks for building advanced analytics.
It's written in Go and leverages open-source technologies like HBase, ClickHouse, Elasticsearch, and Kafka under the covers. We're constantly investing in scaling out the platform and building out richer analytics capabilities in the infrastructure. On top of this core platform we are building network management and analytics applications to fully automate today's enterprise network, from CI/CD pipelines for network automation, to advanced analytics and remediation for network assurance. What You'll Do As a backend software engineer at Arista, you own your project end to end. You and your project team will work with product management and customers to define the requirements and design the architecture. You'll build the backend, write automated tests, and get it deployed into production via our CD pipeline. As a senior member of the team you'll also be expected to help mentor and grow new team members. This role demands a strong and broad software engineering background, and you won't be limited to any single aspect of the product or development process. Qualifications BS/MS degree in Computer Science and 8+ years of relevant experience. Strong knowledge of one or more programming languages (Go, Python, Java). Experience developing distributed systems or scale-out applications for a SaaS environment. Experience developing scalable backend systems in Go is a plus. Experience with network monitoring, network protocols, machine learning or data analytics is a plus. Additional Information Arista stands out as an engineering-centric company. Our leadership, including founders and engineering managers, are all engineers who understand sound software engineering principles and the importance of doing things right. We hire globally into our diverse team. At Arista, engineers have complete ownership of their projects. Our management structure is flat and streamlined, and software engineering is led by those who understand it best.
We prioritize the development and utilization of test automation tools. Our engineers have access to every part of the company, providing opportunities to work across various domains. Arista is headquartered in Santa Clara, California, with development offices in Australia, Canada, India, Ireland, and the US. We consider all our R&D centers equal in stature. Join us to shape the future of networking and be part of a culture that values invention, quality, respect, and fun.
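The "real-time and historical access to the full network state" that the posting describes can be pictured as a state store that keeps the latest value per telemetry path plus an append-only history for point-in-time queries. The sketch below is a hypothetical Python illustration of that idea only; it is not Arista's Go/HBase implementation, and all device and path names are invented:

```python
from collections import defaultdict

# Latest value per (device, path), plus an append-only history per key.
latest = {}
history = defaultdict(list)

def ingest(device, path, timestamp, value):
    """Apply one streamed state update (stand-in for a Kafka consumer)."""
    latest[(device, path)] = value
    history[(device, path)].append((timestamp, value))

def state_at(device, path, timestamp):
    """Historical query: the last value at or before the given time."""
    value = None
    for ts, v in history[(device, path)]:
        if ts <= timestamp:
            value = v
    return value

ingest("switch-1", "intf/eth1/status", 10, "up")
ingest("switch-1", "intf/eth1/status", 20, "down")
print(latest[("switch-1", "intf/eth1/status")])       # current state
print(state_at("switch-1", "intf/eth1/status", 15))   # state as of t=15
```

A production system would shard this keyspace across nodes and index history by time, but the two access patterns — current state and time-travel reads — are the same.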

Posted 3 weeks ago

Apply

2.0 - 4.0 years

1 - 4 Lacs

India

On-site

Microcode Software LLP At Microcode Software LLP, we are a dynamic and innovative software development company committed to delivering cutting-edge IT solutions to businesses worldwide. We pride ourselves on excellence and customer satisfaction, specializing in custom software, mobile apps, and web solutions tailored to meet unique client needs. Designation - Laravel Developer Experience - 2-4 Years Location - Dwarka Mor Responsibilities: - Develop and enhance web applications using the Laravel framework. - Design and implement efficient database schemas and table associations. - Optimize performance with Redis caching and integrate Elasticsearch for search capabilities. - Develop and maintain RESTful APIs for seamless data exchange. - Collaborate with teams to define and ship new features. - Troubleshoot and debug applications for optimal performance. - Write clean, maintainable code and conduct code reviews. - Stay updated with emerging technologies. Qualifications: - Proven experience as a Laravel Developer or similar role. - Strong understanding of database schema design and table associations. - Experience with Redis caching, Elasticsearch, and RESTful APIs. - Familiarity with Git and front-end technologies (HTML, CSS, JavaScript). - Strong problem-solving skills and ability to work independently. - Bachelor's degree in Computer Science or related field (or equivalent experience). How to Apply: Email your resume to hr@microcode.email. Contact: 9289680090 Join us and boost your career in a growing IT firm! Job Type: Full-time Pay: ₹15,211.15 - ₹34,504.89 per month Work Location: In person
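The "Redis caching plus Elasticsearch search" responsibility above is essentially the cache-aside pattern. Here is a minimal sketch in Python, with an in-memory dict standing in for Redis and a stubbed function standing in for an Elasticsearch query; all names, the TTL, and the sample catalog are illustrative, not from the posting:

```python
import json
import time

cache = {}          # stand-in for Redis: key -> (expiry_timestamp, cached_json)
CACHE_TTL = 60      # seconds a cached result stays fresh

def search_products(query):
    """Stand-in for an Elasticsearch full-text query (illustrative only)."""
    catalog = ["red shirt", "blue shirt", "red shoes"]
    return [item for item in catalog if query in item]

def cached_search(query):
    """Cache-aside: check the cache first, fall back to the search
    backend on a miss, then populate the cache for later calls."""
    now = time.time()
    hit = cache.get(query)
    if hit and hit[0] > now:
        return json.loads(hit[1])          # cache hit: skip the backend
    results = search_products(query)       # cache miss: query backend
    cache[query] = (now + CACHE_TTL, json.dumps(results))
    return results

print(cached_search("red"))   # miss: hits the backend
print(cached_search("red"))   # hit: served from cache
```

In a real Laravel application the same shape roughly maps to wrapping the Elasticsearch client call in `Cache::remember()`, with the TTL bounding how stale search results can get.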

Posted 3 weeks ago

Apply

6.0 years

6 - 9 Lacs

Gurgaon

On-site

DESCRIPTION Want to build at the cutting edge of immersive shopping experiences? The Visual Innovation Team (VIT) is at the center of all advanced visual and immersive content at Amazon. We're pioneering VR and AR shopping, CGI, and GenAI. We are looking for a Design Technologist who will help drive innovation in this space, who understands the technical problems the team may face from an artistic perspective, and who can provide creative technical solutions. This role is for you if you want to be a part of: Partnering with world-class creatives and scientists to drive innovation in content creation Developing and expanding Amazon's VIT Virtual Production workflow. Building one of the largest content libraries on the planet Driving the success and adoption of emerging experiences across Amazon Key job responsibilities We are looking for a Design Technologist with a specialty in workflow automation using novel technologies like Gen-AI and CV. You will prototype and deliver creative solutions to the technical problems related to Amazon visuals. The right person will bring an implicit understanding of the balance needed between design, technology, and creative professionals, helping scale video content creation within Amazon by enabling our teams to work smarter, not harder.
Design Technologists in this role will: Act as a bridge between creative and engineering disciplines to solve multi-disciplinary problems Work directly with videographers and studio production to develop semi-automated production workflows Collaborate with other tech artists and engineers to build and maintain a centralized suite of creative workflows and tooling Work with creative leadership to research, prototype and implement the latest industry trends that expand our production capabilities and improve efficiency A day in the life As a Design Technologist, a typical day will include, but is not limited to, coding and development of tools, workflows, and automation to improve the creative crew's experience and increase productivity. This position will be focused on in-house video creation, with virtual production and Gen-AI workflows. You'll collaborate with production teams, observing, empathizing, and prototyping novel solutions. The ideal candidate is observant, creative, curious, and empathetic, understanding that problems often have multiple approaches. BASIC QUALIFICATIONS 6+ years of front-end technologist, engineer, or UX prototyper experience Coding samples in front-end programming languages An available online portfolio Experience developing visually polished, engaging, and highly fluid UX prototypes Experience collaborating with UX, Product, and technical partners PREFERRED QUALIFICATIONS Knowledge of databases and AWS database services: Elasticsearch, Redshift, DynamoDB Experience with machine learning (ML) tools and methods Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, HR, Gurgaon Amazon Design

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

DESCRIPTION We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing associated costs/abuse of returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds - this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts, and operations cost at return centers. We have a huge opportunity to create a legacy, and our Legacy Statement is to "transform ease and quality of living in India, thereby enabling its potential in the 21st century". We also believe that we have an additional responsibility to "help Amazon become truly global in its perspective and innovations" by creating global best-in-class products/platforms that can serve our customers worldwide. This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives, building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you. We are building systems which can scale across multiple marketplaces and are state of the art in automated, large-scale e-commerce business. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers, and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs, and create disproportionate impact through the tech they deliver.
We offer Technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation. As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies for solving customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale and highly available systems. If you are looking for an opportunity to work on world-leading technologies and would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today! As an engineer you will be responsible for: Ownership of product/feature end-to-end for all phases from development to production. Ensuring the developed features are scalable and highly available with no quality concerns. Working closely with senior engineers to refine the design and implementation. Management and execution against project plans and delivery commitments. Assisting directly and indirectly in the continual hiring and development of technical talent. Creating and executing appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts. Contributing intellectual property through patents. The candidate should be an engineer passionate about delivering experiences that delight customers and creating solutions that are robust. He/she should be able to commit to and own the deliveries end-to-end.
About the team Team: IES NCRC Tech Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, like checkout, pre-fulfillment, shipment, and customer contact (customer service). We closely partner with the International machine learning team to build ML-based solutions for the above interventions. Vision: Our goal is to automate detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This would act as a deterrent for abusers, while building trust for genuine customers. We use machine learning based models to automate the abuse detection in a scalable & efficient manner. Technologies: The ML models leveraged by the team include a vast variety ranging from regression-based (XGBoost) to deep-learning models (RNN, CNN), and use frameworks like PyTorch, TensorFlow, and Keras for training & inference. Productionization of ML models for real-time, low-latency, high-traffic use-cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra), and graph DBs, and we are open to adopting new technologies as per use-case. BASIC QUALIFICATIONS 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience Experience programming with at least one software programming language PREFERRED QUALIFICATIONS 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, HR, Gurugram Amazon.in Software Development
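The staged interventions described in this posting (checkout, pre-fulfillment, shipment, customer contact) can be pictured as a score-then-threshold pipeline. The sketch below is purely illustrative: a hand-rolled linear score and made-up thresholds stand in for the team's trained XGBoost/deep-learning models, and none of the feature names or weights come from Amazon:

```python
# Hypothetical feature weights; a trained model would produce the score instead.
RISK_WEIGHTS = {"return_rate": 2.0, "refund_claims": 1.5, "account_age_days": -0.01}

def risk_score(features):
    """Linear score as a stand-in for a trained model's output."""
    return sum(RISK_WEIGHTS[name] * value for name, value in features.items())

def intervention(score):
    """Map a risk score to a stage-appropriate action, strictest first."""
    if score >= 3.0:
        return "block_checkout"
    if score >= 1.5:
        return "hold_pre_fulfillment"
    if score >= 0.5:
        return "flag_customer_contact"
    return "allow"

genuine = {"return_rate": 0.05, "refund_claims": 0.0, "account_age_days": 900}
abusive = {"return_rate": 0.9, "refund_claims": 1.0, "account_age_days": 30}
print(intervention(risk_score(genuine)), intervention(risk_score(abusive)))
```

The design point the posting hints at is that scoring and intervention are decoupled: the same model output can drive different actions at each stage of the buyer's journey.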

Posted 3 weeks ago

Apply

12.0 years

5 - 6 Lacs

Noida

Remote

Location: Noida, Uttar Pradesh, India Job ID: R0098886 Date Posted: 2025-07-07 Company Name: HITACHI INDIA PVT. LTD Profession (Job Category): Other Job Schedule: Full time Remote: No Job Description: Job Title: Solution Architect Designation: Senior Company: Hitachi Rail GTS India Location: Noida, UP, India Salary: As per Industry Company Overview: Hitachi Rail is right at the forefront of the global mobility sector following the acquisition. The closing strengthens the company's strategic focus on helping current and potential Hitachi Rail and GTS customers through the sustainable mobility transition - the shift of people from private to sustainable public transport, driven by digitalization. Position Overview: We are looking for a Solution Architect who will be responsible for translating business requirements into technical solutions, ensuring the architecture is scalable, secure, and aligned with enterprise standards. The Solution Architect will play a crucial role in defining the architecture and technical direction of the existing system. You will be responsible for the design, implementation, and deployment of solutions that integrate with transit infrastructure, ensuring seamless fare collection, real-time transaction processing, and enhanced user experiences. You will collaborate with development teams, stakeholders, and external partners to create scalable, secure, and highly available software solutions. Job Roles & Responsibilities: Architectural Design: Develop architectural documentation such as solution blueprints, high-level designs, and integration diagrams. Lead the design of the system's architecture, ensuring scalability, security, and high availability. Ensure the architecture aligns with the company's strategic goals and future vision for public transit technologies.
Technology Strategy: Select the appropriate technology stack and tools to meet both functional and non-functional requirements, considering performance, cost, and long-term sustainability. System Integration: Work closely with teams to design and implement the integration of the AFC system with various third-party systems (e.g., payment gateways, backend services, cloud infrastructure). API Design & Management: Define standards for APIs to ensure easy integration with external systems, such as mobile applications, ticketing systems, and payment providers. Security & Compliance: Ensure that the AFC system meets the highest standards of data security, particularly for payment information, and complies with industry regulations (e.g., PCI-DSS, GDPR). Stakeholder Collaboration: Act as the technical lead during project planning and discussions, ensuring the design meets customer and business needs. Technical Leadership: Mentor and guide development teams through best practices in software development and architectural principles. Performance Optimization: Monitor and optimize system performance to ensure the AFC system can handle high volumes of transactions without compromise. Documentation & Quality Assurance: Maintain detailed architecture documentation, including design patterns, data flow, and integration points. Ensure the implementation follows best practices and quality standards. Research & Innovation: Stay up to date with the latest advancements in technology and propose innovative solutions to enhance the AFC system.
Skills (Mandatory): DotNet (C#), C/C++, Java, ASP.NET Core (C#), Angular, OAuth2 / OpenID Connect (Authentication & Authorization) JWT (JSON Web Tokens) Spring Cloud, Docker, Kubernetes, Relational Databases (MSSQL) Data Warehousing SOAP/RESTful API Design, Redis (Caching & Pub/Sub) Preferred Skills (Good to have): Python, Android SSL/TLS Encryption OWASP Top 10 (Security Best Practices) Vault (Secret Management) Keycloak (Identity & Access Management) Swagger (API Documentation) NoSQL Databases, GraphQL, gRPC, OpenAPI, Istio, Apache Kafka, RabbitMQ, Consul, DevOps & CI/CD Tools Tools & Technologies: UML (Unified Modeling Language) Lucidchart / Draw.io (Diagramming) PlantUML (Text-based UML generation) C4 Model (Software architecture model), Enterprise Architect (Modeling), Apache Hadoop / Spark (Big Data), Elasticsearch (Search Engine), Apache Kafka (Stream Processing), TensorFlow / PyTorch (Machine Learning/AI) Education: Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field. Experience Required: 12+ years of experience in solution architecture or software design. Proven experience with enterprise architecture frameworks (e.g., TOGAF, Zachman). Strong understanding of cloud platforms (AWS, Azure, or Google Cloud). Experience in system integration, API design, microservices, and SOA. Familiarity with data modeling and database technologies (SQL, NoSQL). Strong communication and stakeholder management skills. Preferred: Certification in cloud architecture (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert). Experience with DevOps tools and CI/CD pipelines. Knowledge of security frameworks and compliance standards (e.g., ISO 27001, GDPR). Experience in Agile/Scrum environments. Domain knowledge in [insert industry: e.g., finance, transportation, healthcare]. Soft Skills: Analytical and strategic thinking. Excellent problem-solving abilities. Ability to lead and mentor cross-functional teams. 
Strong verbal and written communication.
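The OAuth2/JWT items in the mandatory skills above come down to issuing and verifying a three-part compact token. Below is a stdlib-only Python sketch of HS256 signing and verification, meant as a simplified illustration of the mechanism; a production system should use a vetted JWT library and also validate registered claims such as `exp` and `aud`:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: b64url(header).b64url(payload).b64url(sig)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Recompute the signature, compare in constant time, return the payload."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42", "role": "rider"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))
```

In an AFC context this is the shape of the access token a payment or mobile-ticketing API would carry; OAuth2/OpenID Connect then governs how clients obtain such tokens in the first place.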

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida

On-site

Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Position Overview: We are looking for a Lead Software Engineer to help raise the engineering bar for the entire technology stack at Level AI, including applications, platform and infrastructure. They will actively collaborate with team members and the wider Level AI engineering community to develop highly scalable and performant systems. They will be a technical thought leader who will help drive solving complex problems of today and the future by designing and building simple and elegant technical solutions. They will coach and mentor junior engineers and drive engineering best practices. They will actively collaborate with product managers and other stakeholders both inside and outside the team. Competencies: Large-scale distributed systems, search (such as Elasticsearch), high-scale messaging systems (such as Kafka), real-time job queues, high-throughput and low-latency systems, Python, Django, relational databases (such as PostgreSQL), data modeling, DB query optimization, caching, Redis, Celery, CI/CD, GCP, Kubernetes, Docker. Responsibilities - Develop and execute the technical roadmap to scale Level AI's technology stack. Design and build highly scalable and low-latency distributed systems to process large-scale real-time data. Drive best-in-class engineering practices through the software development lifecycle.
Drive operational excellence for critical services that need to have high uptime. Collaborate with a variety of stakeholders within and outside engineering to create technical plans to deliver on important business goals, and lead the execution of those. Stay up to date with the latest technologies and thoughtfully apply them to Level AI's tech stacks. Requirements - Qualification: B.E/B.Tech/M.E/M.Tech/PhD from tier 1/2 Engineering institutes with relevant work experience at a top technology company. 5+ years of backend and infrastructure experience with a strong track record in development, architecture, and design. Hands-on experience with large-scale databases, high-scale messaging systems, and real-time job queues. Experience navigating and understanding large-scale systems, complex code-bases, and architectural patterns. Experience mentoring and providing technical leadership to other engineers in the team. Experience with Google Pub/Sub along with Kafka for queues. Experience with Flask, FastAPI, Django. Experience with auto-scaling in distributed systems. Nice to have: Experience with Google Cloud, Django, Postgres, Celery, Redis. Some experience with AI Infrastructure and Operations. Compensation: We offer market-leading compensation, based on the skills and aptitude of the candidate. To learn more visit: https://thelevel.ai/ Funding: https://www.crunchbase.com/organization/level-ai LinkedIn: https://www.linkedin.com/company/level-ai/ Our AI platform: https://www.youtube.com/watch?v=g06q2V_kb-s
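The "real-time job queues" competency in this posting (Celery/Redis in its stack) reduces to producers handing work to background workers over a broker. A minimal in-process sketch using Python's stdlib `queue` with a sentinel shutdown, standing in for a real broker; the job payloads and the uppercase "processing" are invented for illustration:

```python
import queue
import threading

jobs = queue.Queue()   # stand-in for a Redis/Celery broker
results = {}

def worker():
    """Pull jobs until a None sentinel arrives; record each result."""
    while True:
        job = jobs.get()
        if job is None:
            break
        job_id, payload = job
        results[job_id] = payload.upper()   # pretend "processing"
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
for i, text in enumerate(["hello", "level", "ai"]):
    jobs.put((i, text))        # producer side: enqueue work
jobs.put(None)                 # sentinel: no more work
t.join()
print(results)
```

With Celery the same shape becomes task functions plus `task.delay()`, and Redis (or Pub/Sub/Kafka, also named in the posting) replaces the in-process queue so producers and workers can live on different machines.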

Posted 3 weeks ago

Apply

10.0 - 12.0 years

5 - 7 Lacs

Ahmedabad

On-site

We are seeking a highly skilled and experienced Full Stack Technical Lead to drive the design, development, and delivery of scalable and robust full-stack applications. The candidate will bring deep expertise in backend technologies, frontend frameworks, and cloud-based solutions. Job Description In your new role you will: Design, implement, and own modules to meet the quality, timeline, and process requirements. Lead the end-to-end architecture, design, and development of enterprise-grade full-stack applications. Provide technical guidance, mentorship, and support to development teams, ensuring adherence to coding standards and best practices. Collaborate with cross-functional teams and various stakeholders to ensure project success. Follow Agile/Kanban methodology for development. Build a clear understanding of requirements from various stakeholders for implementation. Your Profile You are best equipped for this task if you have: 10-12 years of experience in software development, with proven expertise in full-stack technologies. A minimum of 5 years of hands-on experience with Spring Boot. At least 1-2 years of experience working with React. Backend Development: Expertise in building and managing microservices architecture. In-depth knowledge of Spring Boot, Spring Security, and related modules. Strong knowledge of Core Java, OOP principles, JUnit, Mockito, and design patterns. Frontend Development (minimum 2 years): Hands-on experience with React.js for building scalable and responsive web applications. Basic understanding of responsive web design, state management libraries like Redux, and frontend performance optimization. Database Management: Good experience with relational databases, particularly PostgreSQL. Knowledge of database performance tuning and optimization techniques. Cloud & DevOps: Experience with AWS services such as EC2, S3, RDS, API Gateway, CloudWatch, and Elasticsearch. Proficiency with containerization (Docker) and orchestration (Kubernetes).
Familiarity with CI/CD pipelines and version control systems like Git. Familiarity with Infrastructure as Code (IaC) tools like Terraform. Good to have - Testing & Monitoring: Basic understanding with automated testing tools like Selenium and API testing tools. Familiarity with performance testing tools like JMeter. Payment domain knowledge. Contact: Padmashali.external2@infineon.com #WeAreIn for driving decarbonization and digitalization. As a global leader in semiconductor solutions in power systems and IoT, Infineon enables game-changing solutions for green and efficient energy, clean and safe mobility, as well as smart and secure IoT. Together, we drive innovation and customer success, while caring for our people and empowering them to reach ambitious goals. Be a part of making life easier, safer and greener. Are you in? We are on a journey to create the best Infineon for everyone. This means we embrace diversity and inclusion and welcome everyone for who they are. At Infineon, we offer a working environment characterized by trust, openness, respect and tolerance and are committed to give all applicants and employees equal opportunities. We base our recruiting decisions on the applicant´s experience and skills. Learn more about our various contact channels. Please let your recruiter know if they need to pay special attention to something in order to enable your participation in the interview process. Click here for more information about Diversity & Inclusion at Infineon.
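The profile above lists Redux-style state management among the frontend skills. As a hedged, language-neutral sketch of that pattern (written in Python rather than JavaScript for brevity; the reducer name, action types, and payloads are all illustrative, not from the posting), the core idea is a pure function mapping (state, action) to a new state without mutating the old one:

```python
# Illustrative sketch of the reducer pattern behind Redux-style state
# management; names and action types are hypothetical.

def cart_reducer(state, action):
    """Return a brand-new state; never mutate the input (the core Redux rule)."""
    if action["type"] == "ADD_ITEM":
        return {**state, "items": state["items"] + [action["payload"]]}
    if action["type"] == "CLEAR":
        return {**state, "items": []}
    return state  # unknown actions leave state unchanged

initial = {"items": []}
s1 = cart_reducer(initial, {"type": "ADD_ITEM", "payload": "sku-42"})
s2 = cart_reducer(s1, {"type": "ADD_ITEM", "payload": "sku-7"})
# `initial` is untouched; immutability is what makes features like
# time-travel debugging and cheap change detection possible in Redux.
```

Because every transition produces a fresh state object, previous states remain valid snapshots, which is the property Redux-based frontends rely on for predictable re-rendering.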

Posted 3 weeks ago


0 years

0 Lacs

Calcutta

Remote

About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc's Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency.

We are hiring two senior software engineers (alternative title: senior backend engineer). One will join our Infrastructure Team, and the other will join our Data Feeds Team.

As Our Senior Software Engineer in the Infrastructure Team, You Will Be Responsible For:
- ERP Data Connector Specialist: You'll implement the connectors that fetch ERP data. It's like building a bridge to a treasure trove of information. You'll also ensure this data has high availability, so whether it's the crack of dawn or the dead of night, the ERP data is accessible without a hitch.
- Backend API Master: Customers' requirements are constantly evolving, and that's where you come in. You'll implement or upgrade backend APIs to fit these new demands like a custom-made glove. Your work will be the driving force behind our ability to keep up with the market.
- ERP Data Storage Maestro: Take charge of the ERP system's data storage. It's your kingdom, and you're responsible for all related improvements. Make sure the data is stored efficiently and securely, and is always ready for action.
- Business-Tech Liaison: Understand the business requirements inside out. Jump right into discussions with the team and bring your A-game to design technical solutions. You'll be the one who bridges the gap between business needs and technical implementation.
- Service Maintenance and Upgrade Champion: Maintain our existing services like a pro. Dive into iterative upgrades, deploy improvements, and take charge of service governance. Your work will be the glue that holds our services together and keeps them evolving.
- Global Team Player: Work hand-in-glove with our US/SG/China teams. Be flexible with work hours, as the data world never sleeps and we need you on top of your game, always!

As Our Senior Software Engineer in the Data Feeds Team, You Will Be Responsible For:
- Data Pipelines Maestro: You'll develop, optimize, and maintain super-scalable data pipelines. Whether it's structured data flowing like a well-oiled stream or unstructured data that needs taming, you've got to keep these pipelines top-notch for seamless processing.
- Data Systems Guardian: You'll maintain and enhance the stability, reliability, and high availability of our existing data systems and services. Think of yourself as the shield that keeps our data world running smoothly.
- Data Architecture Builder: Team up with our amazing crew to construct and refine an expandable, high-performance data architecture. It's like building a digital skyscraper, but with data blocks!
- Data Services Provider: Partner with different teams across the board. Your mission is to serve up high-quality, rock-solid data services to all our internal users. They'll rely on you like a lifeline for their data needs.
- Business-Aligned Data Designer: Get the hang of product and business requirements, then design and implement data functionalities that are not only useful but also come with intuitive data visualizations. Make data come alive!
- Third-Party Data Integrator: Oversee the integration and maintenance of collaborative data with our third-party clients. Solve their data mining and analytical headaches, and you'll be the hero of the data realm.
- Global Team Player: Work hand-in-glove with our US/SG/China teams. Be flexible with work hours, as the data world never sleeps and we need you on top of your game, always!
- Data Governance Enforcer: Enforce best practices in data governance, security, and compliance. You're the gatekeeper of sensitive information, and you've got to keep it safe at all costs.

This is a fully remote opportunity based in India. Standard work hours are from 8 am to 5 pm IST.

You Are Likely To Succeed If You Have:
- A Bachelor's degree in Computer Science or a related major, and 5+ years of backend experience.
- A solid computer science foundation and programming skills, with familiarity with common data structures and algorithms.
- Excellence in one of the following languages: Go/Python.
- Familiarity with one of the following open-source components: MySQL/Redis/message queues/NoSQL.
- Familiarity with Elasticsearch or Spark (for the Data Feeds Team).
- Experience in architecting and developing large-scale distributed systems (for the Infrastructure Team).
- Excellent logical analysis capabilities, with the ability to abstract and split business logic reasonably.
- Exposure to cloud infrastructure, such as Kubernetes/Docker and Azure/AWS/GCP.
- Familiarity with ERP systems.

What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary:
- We care about your personal life, and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more!
- Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics.
- Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust.

We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer.

Job Applicant Privacy Notice
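The Data Feeds role above centers on streaming records through composable pipeline stages. As a minimal, hypothetical sketch of that idea (the stage names, record format, and filter threshold are all illustrative, not from the posting), each stage can be a Python generator so records flow through one at a time without materializing the whole dataset:

```python
# Hypothetical sketch of a composable, streaming data pipeline: each stage
# consumes and yields records lazily, so memory use stays flat regardless
# of input size. All names and the CSV-like format are illustrative.

def parse(lines):
    """Turn raw 'timestamp,value' lines into records."""
    for line in lines:
        ts, value = line.strip().split(",")
        yield {"ts": ts, "value": int(value)}

def clean(records, lo=0):
    """Drop out-of-range readings (a stand-in for a data-cleaning stage)."""
    for r in records:
        if r["value"] >= lo:
            yield r

def totals(records):
    """Terminal stage: reduce the stream to a single aggregate."""
    total = 0
    for r in records:
        total += r["value"]
    return total

raw = ["2024-01-01,5", "2024-01-01,-3", "2024-01-02,7"]
result = totals(clean(parse(raw)))  # -3 is filtered out, leaving 5 + 7
```

Because the stages only share an iterator protocol, each one can be tested, swapped, or scaled independently, which is the property that makes this composition style common in large-scale pipeline work.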

Posted 3 weeks ago
