5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Firewall Engineer - Cloud Support Management based in Bangalore, you will play a crucial role in managing and securing cloud infrastructure across AWS, Azure, and GCP. Your responsibilities include overseeing firewall systems, provisioning cloud infrastructure, responding to incidents, driving automation through Infrastructure as Code (IaC), and enforcing security compliance and policy.

Cloud Infrastructure & Firewall Management: Administer, configure, and secure firewall systems in cloud and hybrid environments. Oversee the provisioning, monitoring, and lifecycle management of cloud infrastructure to ensure performance and operational resilience. Respond to cloud infrastructure incidents and security alerts, identify root causes, and implement long-term remediation strategies. Drive automation and IaC initiatives to improve infrastructure deployment, optimize costs, and enforce policies effectively.

Infrastructure as Code & CI/CD Integration: Use Terraform to define and manage cloud infrastructure, including firewall configurations. Build and maintain CI/CD pipelines using tools such as GitHub Actions, Jenkins, or Azure DevOps to streamline deployment processes and security updates. Store and manage IaC in version control to ensure traceability, compliance, and policy adherence.

Security, Compliance & Policy Enforcement: Implement strong access control, segmentation, security policies, and firewall/ACL configurations in cloud environments. Proactively identify security risks and recommend mitigation strategies for hybrid-cloud deployments. Deploy monitoring solutions for cloud and firewall security visibility, define key reliability and security metrics, and conduct audits of firewall configurations to maintain compliance and transparency.

Collaboration & Knowledge Sharing: Coordinate with infrastructure, DevOps, security, compliance, and engineering teams to resolve incidents and improve architectural resilience. Create SOPs and playbooks, contribute to team knowledge sharing, and support continuous efficiency improvement.

Requirements: At least 5 years of experience in cloud support, firewall engineering, or related network security roles, particularly in AWS, Azure, and GCP hybrid environments. Relevant certifications such as AWS Solutions Architect/DevOps Engineer, Google Cloud Professional Cloud Architect/Engineer, and Azure Administrator/Solutions Architect are beneficial. Proficiency with Terraform, CI/CD tooling, firewall technologies, network segmentation, and VPNs, along with expertise in infrastructure security best practices, is essential, as are strong analytical, problem-solving, communication, and collaboration skills.
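The firewall audit and compliance duties above lend themselves to lightweight scripting. Below is a minimal, illustrative Python sketch (not from the posting) that flags AWS security group ingress rules open to 0.0.0.0/0 using boto3; the region default and the choice of boto3 are assumptions.

```python
# Illustrative sketch only: flag AWS security-group ingress rules open to the world.
# Assumes AWS credentials and permissions are already configured in the environment.
import boto3

def find_open_ingress_rules(region_name="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region_name)
    findings = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                for ip_range in rule.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        findings.append({
                            "group_id": sg["GroupId"],
                            "group_name": sg.get("GroupName", ""),
                            "ports": (rule.get("FromPort"), rule.get("ToPort")),
                            "protocol": rule.get("IpProtocol"),
                        })
    return findings

if __name__ == "__main__":
    for finding in find_open_ingress_rules():
        print(finding)
```

A report like this could feed the audit and alerting work the posting describes, for example by publishing findings to a ticketing or monitoring system.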
Posted 1 day ago
0 years
0 Lacs
Chandigarh, India
On-site
About The Role: We are seeking a highly experienced and hands-on Fullstack Architect to lead the design and architecture of scalable, enterprise-grade software solutions. This role requires a deep understanding of both frontend and backend technologies, cloud infrastructure, and microservices, with the ability to guide teams through technical challenges and solution design.

Responsibilities: Architect, design, and oversee the development of full-stack applications using modern JS frameworks and cloud-native tools. Lead microservice architecture design, ensuring system scalability, reliability, and performance. Evaluate and implement AWS services (Lambda, ECS, Glue, Aurora, API Gateway, etc.) for backend solutions. Provide technical leadership to engineering teams across all layers (frontend, backend, database). Guide and review code, perform performance optimization, and define coding standards. Collaborate with DevOps and Data teams to integrate services (Redshift, OpenSearch, Batch). Translate business needs into technical solutions and communicate with cross-functional teams.

Skills: Deep expertise in Node.js, TypeScript, React.js, Python, Redux, and Jest. Proven experience designing and deploying systems using microservices architecture. Strong understanding of AWS services: API Gateway, ECS, Lambda, Aurora, Glue, SQS, OpenSearch, Batch. Hands-on with MySQL, Redshift, and writing optimized queries. Advanced knowledge of HTML, CSS, Bootstrap, JavaScript. Familiarity with tools: VS Code, DataGrip, Jira, GitHub, Postman. Strong knowledge of architectural design patterns and security best practices.

Preferred: Experience working in fast-paced product development or startup environments. Strong communication and mentoring skills. (ref:hirist.tech)
Posted 1 day ago
0.0 - 2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities: Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements. Identify and analyze issues, make recommendations, and implement solutions. Utilize knowledge of business processes, system processes, and industry standards to solve complex issues. Analyze information and make evaluative judgements to recommend solutions and improvements. Conduct testing and debugging, utilize script tools, and write basic code for design specifications. Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures. Develop working knowledge of Citi’s information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications: 0-2 years of relevant experience. Experience in programming/debugging used in business applications. Working knowledge of industry practice and standards. Comprehensive knowledge of the specific business area for application development. Working knowledge of program languages. Consistently demonstrates clear and concise written and verbal communication. Education: Bachelor’s degree/University degree or equivalent experience. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Description: We are looking for a Cloud & Middleware Infrastructure DevOps Engineer to lead the migration of on-premise application infrastructure to private cloud platforms like AWS or GCP. The ideal candidate will have strong expertise in WebSphere, Apache Tomcat, Linux (RHEL), and OpenShift, along with deep experience in DevOps automation and containerization.

Responsibilities: Administer and support WebSphere Application Server and Apache Tomcat environments. Manage Linux-based infrastructure (RHEL) and OpenShift container platforms. Design, build, and optimize CI/CD pipelines using GitHub and Jenkins. Lead the migration of on-premise middleware and application workloads to AWS, GCP, or private cloud environments. Containerize traditional applications using Docker and Kubernetes. Automate operational tasks using Shell, Ruby, or Python scripting. Implement high availability, load balancing, clustering, and failover strategies. Conduct vulnerability assessments and deploy cybersecurity best practices.

Skills Required: Minimum 1.5-5 years of relevant experience. Experience with key configuration files, datasource setup, and SSL certificate configuration. Application code deployments and rollback.
Strong background in WebSphere, Tomcat, Linux (RHEL), OpenShift, GitHub, Jenkins, Docker, and Kubernetes. Knowledge of single sign-on concepts and basic configuration. Proven experience migrating infrastructure and applications to cloud environments (AWS, GCP, or private cloud). Proficient in scripting for automation (Shell, Ruby, Python). Knowledge of security protocols, certificate management, and SSO integration. Deep understanding of DevOps, cloud-native operations, and modern infrastructure tooling. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills: Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 day ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About The Role: We are actively seeking talented Amazon Connect Contact Center Associates/Engineers at various experience levels for one of our esteemed multinational clients. Amazon Connect Tech Lead with 5-7 years of experience. Experience with Amazon Connect (contact flows, queues, routing profiles). Integration experience with Salesforce (Service Cloud Voice). Hands-on with AWS services: AWS Lambda (for backend logic), Amazon Lex (voice/chat bots), Amazon Kinesis (for call analytics and streaming), Amazon S3 (storage), Amazon DynamoDB, CloudWatch, and IAM. Programming language: Java or Python. Proficiency in programming/scripting with JavaScript and Node.js. Familiarity with CI/CD pipelines, DevOps, and infrastructure as code (Terraform, CloudFormation). Knowledge of contact center KPIs and analytics dashboards. Solid experience with REST APIs and integration frameworks. Experience working in Agile/Scrum environments. Experience automating deployments of contact center setups using CI/CD pipelines or Terraform. Lead the end-to-end implementation and configuration of Amazon Connect. Understanding of telephony concepts (SIP, DID, ACD, IVR, CTI). AWS Certified Solutions Architect or Amazon Connect certification. Why Join IOWeb3 Technologies? Work with a team that values integrity, collaboration, and excellence. Opportunity to contribute to impactful projects for a multinational client. Flexible work arrangements (Remote, Hybrid, or Onsite options). Be part of a company that is at the forefront of digital solutions and customer experience. Apply Now: If you're an Amazon Connect specialist looking for your next challenge, apply today! Please indicate your relevant experience level in your application. (ref:hirist.tech)
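To illustrate the Lambda-plus-Connect integration the posting mentions, here is a minimal, hypothetical Python Lambda handler of the sort a contact flow might invoke. The DynamoDB table name, attribute names, and lookup logic are illustrative assumptions, not details from the posting.

```python
# Hypothetical AWS Lambda handler invoked from an Amazon Connect contact flow.
# Connect passes inputs under event["Details"]; the response must be a flat
# key/value dict so the flow can read the values as contact attributes.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("customer-profiles")  # assumed table name

def lambda_handler(event, context):
    contact = event.get("Details", {}).get("ContactData", {})
    phone = contact.get("CustomerEndpoint", {}).get("Address", "")
    if not phone:
        return {"customerFound": "false"}

    item = table.get_item(Key={"phone_number": phone}).get("Item")
    if not item:
        return {"customerFound": "false"}

    # Flat string values only -- Connect contact flows cannot parse nested JSON here.
    return {
        "customerFound": "true",
        "customerName": str(item.get("name", "")),
        "preferredQueue": str(item.get("preferred_queue", "default")),
    }
```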
Posted 1 day ago
2.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a highly skilled Generative AI Developer with expertise in Large Language Models (LLMs) to join our AI/ML innovation team. The ideal candidate will be responsible for building, fine-tuning, deploying, and optimizing generative AI models to solve complex real-world problems. You will collaborate with data scientists, machine learning engineers, product managers, and software developers to drive forward next-generation AI-powered solutions.

Responsibilities: Design and develop AI-powered applications using large language models (LLMs) such as GPT, LLaMA, Mistral, Claude, or similar. Fine-tune pre-trained LLMs for specific tasks (e.g., text summarization, Q&A systems, chatbots, semantic search). Build and integrate LLM-based APIs into products and systems. Optimize inference performance, latency, and throughput of LLMs for deployment at scale. Conduct prompt engineering and design strategies for prompt optimization and output consistency. Develop evaluation frameworks to benchmark model quality, response accuracy, safety, and bias. Manage training data pipelines and ensure data privacy, compliance, and quality standards. Experiment with open-source LLM frameworks and contribute to internal libraries and tools. Collaborate with MLOps teams to automate deployment, CI/CD pipelines, and monitoring of LLM solutions. Stay up to date with state-of-the-art advancements in generative AI, NLP, and foundation models.

Required Skills: LLMs & Transformers: deep understanding of transformer-based architectures (e.g., GPT, BERT, T5, LLaMA, Falcon). Model Training/Fine-Tuning: hands-on experience training and fine-tuning large models using libraries such as Hugging Face Transformers, DeepSpeed, LoRA, and PEFT. Prompt Engineering: expertise in designing, testing, and refining prompts for specific tasks and outcomes. Python: strong proficiency in Python with experience in ML and NLP libraries. Frameworks: experience with PyTorch, TensorFlow, Hugging Face, LangChain, or similar frameworks. MLOps: familiarity with tools like MLflow, Kubeflow, Airflow, or SageMaker for model lifecycle management. Data Handling: experience with data pipelines, preprocessing, and working with structured and unstructured data.

Desirable Skills: Deployment: knowledge of deploying LLMs on cloud platforms like AWS, GCP, Azure, or edge devices. Vector Databases: experience with FAISS, Pinecone, Weaviate, or ChromaDB for semantic search applications. LLM APIs: experience integrating with APIs like OpenAI, Cohere, Anthropic, Mistral, etc. Containerization: Docker, Kubernetes, and cloud-native services for scalable model deployment. Security & Ethics: understanding of LLM security, hallucination handling, and responsible AI.

Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 2-4 years of experience in ML/NLP roles, with at least 1-2 years specifically focused on generative AI and LLMs. Prior experience working in a research or product-driven AI team is a plus. Strong communication skills to explain technical concepts and findings.

Soft Skills: Analytical thinker with a passion for solving complex problems. Team player who thrives in cross-functional settings. Self-driven, curious, and always eager to learn the latest advancements in AI. Ability to work independently and deliver high-quality solutions under tight deadlines. (ref:hirist.tech)
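For readers unfamiliar with retrieval-augmented generation (RAG), which the posting references, here is a minimal Python sketch of the retrieval step: rank document chunks by cosine similarity to a query and assemble a grounded prompt. The embed() function is a placeholder, not a real model, and all names are illustrative.

```python
# Minimal RAG retrieval sketch: score chunks against a query and build a prompt.
# embed() is a stand-in for a real embedding model (e.g., a sentence-transformer).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; deterministic per text, 384 dimensions assumed."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)

def top_k_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    scores = []
    for chunk in chunks:
        c = embed(chunk)
        scores.append(float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c))))
    ranked = sorted(zip(scores, chunks), key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in ranked[:k]]

def build_prompt(query: str, chunks: list[str]) -> str:
    context = "\n\n".join(top_k_chunks(query, chunks))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

In a production pipeline the placeholder embedding would be replaced by a real model and the linear scan by a vector database, but the retrieval-then-prompt flow stays the same.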
Posted 1 day ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Position Summary: We are looking for a highly skilled and experienced Data Engineer who will focus on leading the development and implementation of our Data Warehouse/Lakehouse solution, ensuring it serves as the foundation for scalable, high-performance analytics.

Responsibilities: Lakehouse Design & Implementation: Lead the end-to-end development and deployment of a scalable and secure Lakehouse architecture. Define best practices for data ingestion, storage, transformation, and processing using modern cloud technologies. Architect data pipelines using ETL/ELT frameworks to support structured, semi-structured, and unstructured data. Optimize data modeling strategies to meet the analytical and performance needs of stakeholders. Evaluate and select appropriate cloud technologies, frameworks, and architectures.

Requirements: 8+ years of experience in data engineering, with a proven track record of implementing large-scale data solutions. Extensive experience with cloud platforms (AWS, GCP, or Azure), specifically in data warehouse/lakehouse implementations. Expertise in modern data architectures with tools like Databricks, Snowflake, or BigQuery. Strong background in SQL, Python, and distributed computing frameworks (Spark, Dataflow, etc.). In-depth knowledge of data modeling principles (e.g., Star Schema, Snowflake Schema). Experience in enabling AI tools to consume data from the Lakehouse.

About Aumni Techworks: Established in 2016, Aumni Techworks partners with its multinational clients to incubate and operate remote teams in India using the AumniBOT model. With a team of 250 and growing, our mission is to provide a quality alternative to project-based outsourcing.

Benefits of working at Aumni Techworks: Work within a product team on cutting-edge tech with one of the best pay packages. No politics, no bench, voice your opinion, flat hierarchy, and global exposure. Work environment to re-live our fun college days (awarded Best Culture by Pune Mirror). Recharge frequently with Friday socials, dance classes, theme parties, and the monsoon picnic. Breakout spaces at the office - gym, pool, TT, foosball, and carrom. Health focused - insurance coverage and get in shape with AumniFit (do not miss our 4 PM plank!)
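As a concrete picture of the ETL/ELT pipeline work described above, here is a small, illustrative PySpark step: land raw files, apply a light transformation, and write a partitioned, analytics-ready table. The paths, column names, and Parquet sink are assumptions for the sketch; a Delta or other lakehouse table format could be used instead.

```python
# Illustrative PySpark ELT step for a lakehouse-style pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Assumed raw landing zone with JSON order events.
raw = spark.read.json("s3://example-bucket/raw/orders/")

curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_timestamp"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Partitioned, analytics-ready output for downstream consumers.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-bucket/curated/orders/"))
```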
Posted 1 day ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Data Engineer (AWS Tech Stack). Location: Pune (WFO). Experience: 10+ years.

Job Description: We are seeking a highly skilled Data Integration Architect to join our team. The ideal candidate will have extensive experience with the AWS tech stack and a strong background in data integration, ETL processes, and cloud architecture. As a Data Integration Architect, you will be responsible for designing, implementing, and managing data integration solutions to support our organization's data strategy.

Roles & Responsibilities: Architecture and Design: Design and implement data integration solutions using AWS services. Develop architecture blueprints and detailed documentation. Ensure solutions are scalable, secure, and aligned with best practices. Implement logging, alerting, and monitoring at the various stages of the data flow.

Data Management: Manage and optimize ETL processes to ensure efficient data flow. Integrate data from various sources into data lakes, data warehouses, and other storage solutions. Ensure data quality, consistency, and integrity across the data lifecycle.

Data Pipeline Management: Ensure data ingestion and pipelines are robust and scalable for very large data sets.

Collaboration and Leadership: Work closely with data engineers, data scientists, and other stakeholders to understand data requirements and deliver solutions. Lead technical discussions and provide guidance to the development team. Conduct code reviews and ensure adherence to coding standards.

Performance Optimization: Monitor and optimize the performance of data integration processes. Troubleshoot and resolve issues related to data integration and ETL pipelines. Implement and manage data security and compliance measures. Design and review the semantic and aggregation layers. Dashboard performance tuning (Looker).

Qualifications - Must-Have Education and Experience: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Minimum of 7 years of experience in data integration and ETL processes. Extensive experience with AWS services such as AWS Glue, AWS Lambda, Amazon Redshift, Amazon S3, and AWS Data Pipeline.

Technical Skills: Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL). Strong programming skills in Python. Experience with data modeling, data warehousing, and big data technologies. Familiarity with data governance and data security best practices.

Soft Skills: Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to work independently and as part of a team.

Good To Have: AWS Certified Solutions Architect or equivalent certification. Experience with machine learning and AI technologies. Familiarity with other cloud platforms (Google Cloud). Experience and exposure to US healthcare and life sciences. GCP LookML. (ref:hirist.tech)
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: We are seeking a highly motivated Python & AWS Developer with at least 5 years of professional experience in designing, developing, and deploying scalable applications. The ideal candidate should have strong hands-on experience with Python programming and AWS cloud services, along with a solid understanding of application architecture and cloud-native development.

Responsibilities: Develop, test, and maintain backend applications using Python. Design and implement AWS-based solutions using services such as EC2, Lambda, S3, RDS, API Gateway, CloudWatch, etc. Collaborate with cross-functional teams to gather requirements and deliver high-quality software solutions. Optimize applications for performance, scalability, and security. Write clean, reusable, and efficient code following best practices. Deploy, monitor, and troubleshoot applications in AWS. Participate in code reviews and provide constructive feedback.

Skills & Qualifications: 5+ years of experience in software development with Python. Hands-on experience with AWS cloud services (EC2, S3, Lambda, RDS, IAM, API Gateway). Familiarity with RESTful API development and integration. Strong understanding of version control systems (Git). Experience with relational databases (MySQL/PostgreSQL). Knowledge of containerization (Docker) and CI/CD pipelines is a plus. Good problem-solving and analytical skills. (ref:hirist.tech)
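As a small, hedged illustration of the Lambda-plus-API Gateway pattern named above, the sketch below shows a Python handler that fetches a JSON object from S3 and returns it. The bucket name, key parameter, and error handling are assumptions, not requirements from the posting.

```python
# Hypothetical AWS Lambda handler (Python) behind API Gateway (proxy integration):
# fetch a JSON document from S3 and return it in the HTTP response.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-config"  # assumed bucket name

def lambda_handler(event, context):
    key = (event.get("queryStringParameters") or {}).get("key", "default.json")
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=key)
        body = obj["Body"].read().decode("utf-8")
        return {"statusCode": 200,
                "headers": {"Content-Type": "application/json"},
                "body": body}
    except s3.exceptions.NoSuchKey:
        return {"statusCode": 404,
                "body": json.dumps({"error": f"{key} not found"})}
```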
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
You should have 4-7 years of experience in Python, PySpark, AWS, SQL, DSA, Databricks, Hadoop, Spark, MapReduce, and HBase. The job is located in Whitefield, Bangalore, and follows a hybrid work mode. Your responsibilities will include implementing good development practices and hands-on coding in Python or PySpark, drawing on experience with Big Data technologies such as Hadoop, MapReduce, Spark, HBase, and ElasticSearch. You should have a good understanding of programming principles and development practices, and be able to grasp new concepts and technologies for large-scale engineering developments. Additionally, you should excel in application development, support, integration development, and data management. Culturally, you are expected to be a strategic thinker who is analytical and data-driven, to bring raw intellect, talent, and energy, and to understand the demands of a private, high-growth company. You should be both a leader and a hands-on "doer." To qualify for this position, you must have a track record of relevant work experience in computer science or a related technical discipline. Functional and object-oriented programming experience in Python or PySpark is essential. You should also have hands-on experience with the Big Data stack, a good understanding of AWS services, and experience working with APIs and microservices. Effective communication skills, the ability to collaborate with engineers, data scientists, and product managers, and comfort in a fast-paced startup environment are also required. Preferred qualifications include experience with agile methodology, database modeling and development, data mining, warehousing, architecture, and delivery of enterprise-scale applications. You should also be capable of developing frameworks and design patterns, understanding and tackling technical challenges, proposing comprehensive solutions, and guiding junior staff. Experience working with large, complex data sets from various sources is a plus.
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Perennial Systems is seeking highly skilled and motivated AI/ML Engineers to join our growing Data & AI team. This role offers the opportunity to work at the forefront of Generative AI, NLP, and Document AI, collaborating with cross-functional teams to design, develop, and deploy AI-powered solutions that solve real-world business problems. Whether you are passionate about building robust, scalable AI systems or conducting research to push the boundaries of machine learning innovation, we offer an environment that fosters autonomy, technical depth, and meaningful impact.

Responsibilities: Design, build, and deploy scalable AI/ML models and systems for real-world applications. Work across Generative AI, NLP, and Document AI to deliver innovative solutions. Translate business requirements into technical specifications for AI solutions. Explore and experiment with emerging AI models, frameworks, and algorithms. Conduct performance benchmarking and fine-tuning of AI/ML models. Contribute to internal AI best practices, reusable components, and knowledge sharing. Work closely with product managers, data engineers, and software developers to integrate AI capabilities into production systems. Ensure models are production-ready, optimized for scalability, and compliant with quality and security standards. Monitor model performance post-deployment and implement enhancements. Keep up to date with advancements in AI research, tools, and techniques.

Skills & Qualifications: Experience: 3 to 6 years in AI/ML engineering, applied machine learning, or related fields. Strong proficiency in Python and ML frameworks (TensorFlow, PyTorch, Hugging Face, etc.). Expertise in NLP techniques, large language models (LLMs), and Generative AI frameworks. Hands-on experience with data preprocessing, feature engineering, and model evaluation. Familiarity with cloud-based AI/ML services (AWS SageMaker, Azure ML, GCP AI Platform). Experience with deploying AI/ML models into production environments. Knowledge of MLOps tools and workflows (MLflow, Kubeflow, Docker, Kubernetes). Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Ability to work independently in a fast-paced, agile environment. Exposure to vector databases, prompt engineering, or retrieval-augmented generation (RAG). Contributions to open-source AI projects or published research in relevant fields. Experience with document understanding and OCR-based AI solutions. (ref:hirist.tech)
Posted 1 day ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: Senior Full-Stack Engineer (MERN + Python). Location: Noida. Experience: 5 to 10 years.

We are hiring a Senior Full-Stack Engineer with proven expertise in MERN technologies and Python backend frameworks to deliver scalable, efficient, and maintainable software solutions. You will design and build web applications and microservices, leveraging FastAPI and advanced asynchronous programming techniques to ensure high performance and reliability.

Responsibilities: Develop and maintain web applications using the MERN stack alongside Python backend microservices. Build efficient and scalable APIs with Python frameworks like FastAPI and Flask, utilizing AsyncIO, multithreading, and multiprocessing for optimal performance. Lead architecture and technical decisions spanning both the MERN frontend and the Python microservices backend. Collaborate with UX/UI designers to create intuitive and responsive user interfaces. Mentor junior developers and conduct code reviews to ensure adherence to best practices. Manage and optimize databases such as MongoDB and PostgreSQL for application and microservices needs. Deploy, monitor, and maintain applications and microservices on AWS cloud infrastructure (EC2, Lambda, S3, RDS). Implement CI/CD pipelines to automate integration and deployment processes. Participate in Agile development practices including sprint planning and retrospectives. Ensure application scalability, security, and performance across frontend and backend systems. Design cloud-native microservices architectures focused on high availability and fault tolerance.

Skills and Experience: Strong hands-on experience with the MERN stack: MongoDB, Express.js, React.js, Node.js. Proven Python backend development expertise with FastAPI and Flask. Deep understanding of asynchronous programming using AsyncIO, multithreading, and multiprocessing. Experience designing and developing microservices and RESTful/GraphQL APIs. Skilled in database design and optimization for MongoDB and PostgreSQL. Familiar with AWS services such as EC2, Lambda, S3, and RDS. Experience with Git, CI/CD tools, and automated testing/deployment workflows. Ability to lead teams, mentor developers, and make key technical decisions. Strong problem-solving, debugging, and communication skills. Comfortable working in Agile environments and collaborating cross-functionally. (ref:hirist.tech)
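For context on the FastAPI and AsyncIO skills named above, here is a minimal, hedged sketch of an async endpoint that awaits two downstream calls concurrently with asyncio.gather. The route, models, and the sleep-based stand-ins for real database or HTTP calls are assumptions for illustration only.

```python
# Minimal FastAPI sketch: one request handler awaiting two async lookups concurrently.
import asyncio
from fastapi import FastAPI

app = FastAPI()

async def fetch_profile(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # placeholder for an async DB/HTTP call
    return {"user_id": user_id, "name": "example"}

async def fetch_orders(user_id: int) -> list[dict]:
    await asyncio.sleep(0.1)  # placeholder for an async DB/HTTP call
    return [{"order_id": 1, "user_id": user_id}]

@app.get("/users/{user_id}/summary")
async def user_summary(user_id: int):
    # Run both lookups concurrently instead of sequentially.
    profile, orders = await asyncio.gather(fetch_profile(user_id), fetch_orders(user_id))
    return {"profile": profile, "order_count": len(orders)}
```

Run with an ASGI server such as uvicorn; the concurrent awaits are what keep latency close to the slowest dependency rather than the sum of all of them.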
Posted 1 day ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Essential Duties & Responsibilities: You will write optimized code in JavaScript/TypeScript and work with advanced Node.js and React.js frameworks. You will participate in regular design sessions, code reviews, and agile ceremonies. You will work closely with the Product Owner/Manager and the scrum team to help deliver high-quality features inside of agreed timescales. You will identify areas for modification or refactoring inside our code-base and champion their improvement. You will lead by example, contributing to a culture of high quality, personal ownership, and customer-focused execution. You will also coordinate with clients directly on various aspects of the project lifecycle. You will be responsible for maintaining best coding practices in your team. At times you may interview with the client before being onboarded onto a new project.

Requirements & Skills: 5-7+ years of development experience, including project leadership experience. Proven track record of delivering high-quality, high-stakes projects in an agile environment. Proven experience in building, mentoring, and managing efficient development teams. Strong experience with TypeScript and JavaScript, Node.js, and Express.js. Strong experience with relational databases like Postgres or MySQL and ACID principles. Strong experience with React.js, Redux Toolkit, and other state management libraries. Modern source control systems (like Git, Bitbucket). Analyzing user requirements, envisioning system features and functionality. Design, build, and maintain efficient, reusable, and reliable code by setting expectations and feature priorities throughout the development life cycle. Strong experience in designing, extending, and implementing REST APIs. Strong database design experience. Exposure to Continuous Integration/Continuous Deployment practices (DevOps). Experience with testing tools including Mocha, Chai, Sinon, Supertest, Enzyme, Istanbul, Selenium, Load Runner, JSLint, and Cucumber. Exposure to AWS, GCP, or Azure. Good expertise with server-side development using Node.js, specifically through usage of microservices. Exposure to design patterns, clean coding practices, and SOLID design principles. Good exposure to asynchronous programming. Good exposure to API documentation tools like Postman or Swagger. Good exposure to code quality management tools like linters or formatters. Good exposure to unit testing tools.

Good To Have: Agile, Scrum, TDD. Experience in building serverless applications using Node.js by leveraging AWS Lambda, Azure Functions, etc. SonarQube and clean-code best practices. Experience with Atlassian JIRA and Confluence for managing the application lifecycle. Experience with Docker and Kubernetes is a plus. (ref:hirist.tech)
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
You are a highly skilled QA Lead with 6-9 years of experience in manual, database, and automation testing, and you will be joining our dynamic team. Your primary responsibility will be to ensure the quality of our software products by designing and executing both manual and automated test cases. You should hold a Bachelor's degree in Computer Science, Engineering, or a related technical field. Your role will involve participating in requirement analysis and review to ensure testability and identify potential issues early on. You will also be responsible for developing test scenarios and test cases for the capabilities of our products and executing them during formal test periods for every release. Experience in Functional, Smoke, Regression, System Integration, and End-to-End testing is essential. You should also be proficient in backend API testing using tools such as Postman and the Rest Assured Java library, covering GET, PUT, POST, and DELETE methods and asserting responses. Additionally, experience with web-based applications, cross-browser testing, and the Zephyr Scale test management app for Jira would be beneficial. Experience working with traditional databases like SQL Server, as well as non-traditional databases such as MongoDB and PostgreSQL, and with the Snowflake cloud-based data warehousing platform, is a huge plus. You should have extensive work experience in Agile teams using Scrum/Kanban models and be familiar with Atlassian tools like JIRA and Confluence. An understanding of the CI/CD process and familiarity with tools like Jenkins or Azure DevOps is required. Prior experience with any cloud platform (e.g., Azure or AWS) and related services would be advantageous. Proficiency with test automation frameworks and tools such as Selenium WebDriver, JUnit, Maven, and TestNG, as well as building test scripts using the Java programming language and the Behavior-Driven Development (BDD) Cucumber framework, is necessary. In this role, you will collaborate with product managers, developers, and other stakeholders to clarify requirements and ensure high-quality deliverables.
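The posting names Postman and the Java-based Rest Assured library for backend API testing; as a rough, language-agnostic illustration of the same idea, here is a comparable check written with pytest and the Python requests library. The BASE_URL and endpoints are hypothetical and not part of the posting.

```python
# Illustrative API checks with pytest + requests (a Python analogue of the
# Postman / Rest Assured style of GET/POST/DELETE assertions in the posting).
import requests

BASE_URL = "https://api.example.com"  # assumed service under test

def test_get_user_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=10)
    assert resp.status_code == 200
    payload = resp.json()
    assert payload["id"] == 42
    assert "email" in payload

def test_create_user_then_delete():
    created = requests.post(f"{BASE_URL}/users", json={"name": "QA Bot"}, timeout=10)
    assert created.status_code in (200, 201)
    user_id = created.json()["id"]
    deleted = requests.delete(f"{BASE_URL}/users/{user_id}", timeout=10)
    assert deleted.status_code in (200, 204)
```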
Posted 1 day ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities: Develop, fine-tune, and deploy Large Language Models (LLMs) for various applications, including chatbots, virtual assistants, and enterprise AI solutions. Build and optimize conversational AI solutions, with at least 1 year of experience in chatbot development. Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. Design and develop ML/DL-based models to enhance natural language understanding capabilities. Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications. Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc., for domain-specific tasks. Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance. Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications. Develop APIs and pipelines to integrate LLMs into enterprise applications. Research and stay up to date with the latest advancements in LLM architectures, frameworks, and AI trends.

Required Skills & Qualifications: 1-4 years of experience in Machine Learning (ML), Deep Learning (DL), and NLP-based model development. Hands-on experience in developing and deploying conversational AI/chatbots is a plus. Strong proficiency in Python and experience with ML/DL frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers. Experience with LLM agent development frameworks like LangChain, LlamaIndex, AutoGen, and LangGraph. Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) and embedding models. Understanding of prompt engineering and fine-tuning LLMs. Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale. Experience working with APIs, Docker, and FastAPI for model deployment. Strong analytical and problem-solving skills. Ability to work independently and collaboratively in a fast-paced environment.

Good To Have: Experience with multi-modal AI models (text-to-image, text-to-video, speech synthesis, etc.). Knowledge of knowledge graphs and symbolic AI. Understanding of MLOps and LLMOps for deploying scalable AI solutions. Experience in automated evaluation of LLMs and bias mitigation techniques. Research experience or published work in LLMs, NLP, or Generative AI is a plus. (ref:hirist.tech)
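Since FAISS comes up in the vector-database requirement above, the sketch below shows a minimal FAISS-backed semantic search of the kind used in RAG pipelines. The embed_batch() stub, the 384-dimension setting, and the sample documents are assumptions for illustration; a real embedding model would replace the stub.

```python
# Minimal FAISS semantic-search sketch (exact inner-product index over normalized vectors).
import faiss
import numpy as np

DIM = 384  # assumed embedding dimension

def embed_batch(texts: list[str]) -> np.ndarray:
    """Placeholder: replace with a real embedding model producing (n, DIM) float32 vectors."""
    rng = np.random.default_rng(0)
    vectors = rng.normal(size=(len(texts), DIM)).astype("float32")
    faiss.normalize_L2(vectors)  # normalized vectors make inner product == cosine similarity
    return vectors

documents = ["reset your password", "update billing details", "cancel a subscription"]
index = faiss.IndexFlatIP(DIM)
index.add(embed_batch(documents))

query_vec = embed_batch(["how do I change my password?"])
scores, ids = index.search(query_vec, 2)
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{documents[doc_id]!r} (score={score:.3f})")
```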
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: At Optimum Info, we are continually innovating and developing a range of software solutions empowering the Network Development and Field Operations businesses in the Automotive, Power Sports, and Equipment industries. Our integrated suite of comprehensive solutions provides a seamless and rich experience to our customers, helping them become more effective at their work and create an impact on the organization. Our sharp cultural focus on outstanding customer service and employee empowerment is core to our growth and success. As a growing company, we offer incredible opportunities for learning and growth, with the opportunity to manage high-impact business solutions.

Position Overview: The Infrastructure Engineer will be responsible for maintaining Optimum's server and end-user infrastructure and will work on initiatives to enhance the performance, reliability, and security of assets on the Amazon cloud. The position is based in Noida, India, and will collaborate with infrastructure and Infosec team members based out of Optimum's other locations (Ahmedabad, India and Los Angeles, USA).

Key Responsibilities: AWS Infrastructure Management: Provision, configure, and monitor cloud infrastructure on AWS, ensuring high availability, performance, and security. Server Administration: Manage and maintain Windows and Linux servers, including patching, backup, and troubleshooting. Resource Optimization: Continuously review cloud resource utilization to optimize performance and reduce costs. Monitoring & Incident Response: Set up and manage monitoring tools, respond to alerts, and troubleshoot infrastructure issues. Security & Compliance: Ensure compliance with security policies, manage SSL certificates, and support access control mechanisms. Collaboration & Automation: Work with DevOps and Security teams to implement automation, infrastructure-as-code (IaC), and best practices. Office 365 Administration: Oversee O365 services, user management, and security settings.

Desired Qualifications & Experience: Bachelor's degree in engineering or a related field, with 3-5 years of experience managing cloud infrastructure. Cloud operations certification is a plus. Hands-on experience with AWS services such as EC2, S3, IAM, VPC, and CloudWatch. Strong knowledge of Windows and Linux server administration. Experience with cloud cost optimization strategies. Familiarity with Infrastructure-as-Code tools (Terraform, CloudFormation) is a plus. Strong English communication skills and proficiency in MS Office (Word, Excel, PowerPoint).

Preferred Certifications: AWS Certified SysOps Administrator - Associate. AWS Certified Solutions Architect - Associate. (ref:hirist.tech)
Posted 1 day ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Overview: As a Full Stack Developer, you will design, develop, and deploy end-to-end solutions using modern front-end and back-end technologies. You will be involved in the full software development lifecycle, working closely with product owners, architects, and DevOps teams to deliver robust, cloud-native applications.

Key Responsibilities: Design, develop, and optimize full-stack applications using Angular 14+, Java, and Spring Boot. Implement microservices architecture with a focus on scalability, resilience, and performance. Develop and integrate RESTful APIs and ensure security, versioning, and best practices. Apply Test-Driven Development (TDD) principles using frameworks like JUnit and Jasmine/Karma. Participate in Agile ceremonies (Scrum, XP) and contribute to sprint planning, code reviews, and retrospectives. Set up and maintain CI/CD pipelines using Jenkins or an equivalent. Deploy applications to AWS, GCP, or Pivotal Cloud Foundry (PCF) using Kubernetes for orchestration. Manage and optimize relational databases (Oracle, PostgreSQL), ensuring efficient query design and indexing. Mentor junior developers and promote coding standards, design patterns, and best practices.

Skills: Frontend: Angular 14+, TypeScript, HTML5, CSS3, RxJS, NgRx. Backend: Java 8+, Spring Boot, Spring Security, Spring. Databases: Oracle, PostgreSQL, SQL. Cloud & DevOps: AWS/GCP/PCF, Kubernetes, Docker, Jenkins. Testing: JUnit, Mockito, Jasmine. Practices: Agile/Scrum, TDD, Domain-Driven Design (DDD). (ref:hirist.tech)
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
A career with Point72's Technology Team involves being part of a dynamic group that is dedicated to reimagining the future of investing. The Technology group at Point72 is focused on continuously enhancing the company's IT infrastructure to stay ahead in the rapidly evolving technology landscape. By embracing enterprise agile methodology, the team of experts is committed to providing exceptional end-user experiences. Professional development is encouraged to foster innovative ideas and intellectual curiosity among team members. The Technology Infrastructure team at Point72 is responsible for engineering and operating the foundational technology platforms that support all applications and businesses within the firm. The team works with a wide range of technologies, from datacenter infrastructure to large-scale cloud services, with the goal of delivering reliable, performant, and modern technology platforms to improve time-to-market for the business. Additionally, the team provides end-user technology solutions to meet the collaboration and productivity needs of global teams. Innovation, challenging the status quo, and a collaborative working environment are key aspects of the team's culture. As a member of the Technology Team at Point72, some of your responsibilities will include leading the design and implementation of advanced automation solutions to enhance technology infrastructure operations, architecting scalable infrastructure automation systems using AWS cloud services, developing and maintaining automation systems with object-oriented programming languages, leveraging tools like Ansible and Terraform for configuration automation, collaborating with cross-functional teams to drive automation initiatives, and mentoring junior engineers to contribute to their professional development. To be successful in this role, you should have 10+ years of professional experience as a software engineer with a Bachelor's or higher degree in computer science or a related field. Strong proficiency in programming using object-oriented languages like Rust, Java, or Python, expertise with core AWS services, knowledge of networking concepts and technologies, familiarity with configuration automation tools, experience with real-time messaging systems, and strong problem-solving skills are some of the requirements for this position. Additionally, good communication skills, the ability to work collaboratively in a team environment, and a commitment to the highest ethical standards are essential qualities for candidates. Join Point72, a leading global alternative investment firm, and be part of a team that aims to deliver superior returns for investors through fundamental and systematic investing strategies. By attracting and retaining top talent, Point72 is committed to cultivating an investor-led culture and supporting the long-term growth of its people. Learn more about Point72 at https://point72.com/.
Posted 1 day ago
9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description: Senior Software Engineer in Test I, Chennai, India. The Opportunity: Anthology delivers education and technology solutions so that students can reach their full potential and learning institutions thrive. Our mission is to empower educators and institutions with meaningful innovation that’s simple and intelligent, inspiring student success and institutional growth. The Power of Together is built on having a diverse and inclusive workforce. We are committed to making diversity, inclusion, and belonging a foundational part of our hiring practices and who we are as a company. For more information about Anthology and our career opportunities, please visit www.anthology.com. At Anthology, software engineers will learn to apply their software development expertise as members of a cross-functional team. Our teams usually consist of Product Managers, UX Designers, and Developers of varying interests to create a Full Stack team. Primary responsibilities will include: Being an architect and driving the implementation of scalable, maintainable test automation frameworks and strategies across the tech stack, leveraging tools such as Selenium with Java, Playwright, Postman Collections, and JMeter for comprehensive UI and API automation. Leading cross-functional collaborations with engineering, product, and business stakeholders to translate complex business workflows and technical requirements into a cohesive test strategy that maximizes coverage and minimizes risk. Providing technical leadership in cloud-based test automation, with deep expertise in AWS services, CI/CD pipeline integration, and infrastructure-as-code practices to support scalable validation at every stage of deployment. Owning and evolving the organization’s end-to-end testing strategy, defining best practices in test planning, risk assessment, and test governance using Azure DevOps (ADO), while ensuring alignment with business goals and system design. Driving impact analysis for code changes across distributed systems and proactively enhancing testing strategies to maintain system integrity and reduce regression risks. Designing and reviewing highly reusable, modular, and maintainable automated test cases that validate functionality, performance, data integrity, security, and usability across the full product surface area. Overseeing test data strategies, coordinating test artifact management, and ensuring adaptability to dynamic project requirements. Owning the regression testing portfolio, continuously optimizing for coverage, stability, and execution efficiency through automation best practices and emerging tools. Serving as a technical mentor and QA thought leader within Agile SCRUM teams, championing test excellence and supporting continuous delivery of high-quality software. Leading root cause investigations for complex production issues, enforcing accountability in test coverage gaps and ensuring comprehensive traceability through the test lifecycle. Defining and enforcing quality engineering standards and processes, fostering a culture of continuous improvement, innovation, and operational excellence. Triaging, managing, and communicating defects within ADO, driving swift issue resolution through close collaboration with development teams. Spearheading QA process improvements across teams, identifying systemic inefficiencies and leading initiatives to elevate testing maturity and engineering productivity.
The Candidate: Required skills/qualifications: Bachelor’s degree in Computer Science, Computer Engineering, or a related technical field, or equivalent industry experience. 9+ years of progressive experience in software quality engineering, including proven leadership in automation strategy, test architecture, and cross-team initiatives. Expertise in designing and implementing robust automation solutions using Playwright (JavaScript or TypeScript), Selenium with Java (BDD), Postman for comprehensive API validation, and JMeter for load and performance testing. Deep understanding of SDLC/STLC, test pyramids, and QA best practices across diverse application architectures. Demonstrated ability to lead large-scale test initiatives and contribute to test infrastructure improvements with an engineering mindset. Strong analytical and debugging skills, with the ability to quickly assess issues across systems and guide teams toward resolution. Experience with Microsoft Visual Studio Test Professional and Azure DevOps for test case management, test plans, and reporting. Excellent communication skills with the ability to advocate for quality across both technical and non-technical stakeholders. High initiative, an ownership mentality, and a commitment to driving results through collaboration and mentorship. Preferred skills/qualifications: In-depth knowledge of AWS Connect and broader AWS services. Experience defining performance benchmarks and executing advanced performance tests (load, stress, and endurance). Familiarity with CRM systems and Student Information Systems (SIS). Advanced understanding of Agile and DevOps principles, with a record of hands-on leadership in SCRUM environments. This job description is not designed to contain a comprehensive listing of activities, duties, or responsibilities that are required. Nothing in this job description restricts management's right to assign or reassign duties and responsibilities at any time. Anthology is an equal employment opportunity/affirmative action employer and considers qualified applicants for employment without regard to race, gender, age, color, religion, national origin, marital status, disability, sexual orientation, gender identity/expression, protected military/veteran status, or any other legally protected factor.
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Who We Are: At Inchcape Shipping Services, our vision is to create a connected world where customers can trade successfully and make informed decisions in every port, everywhere. We achieve this by combining our worldwide infrastructure with local expertise, through our global network of more than 250 proprietary offices and a team of over 3,000 dedicated professionals. Our diverse customer base includes owners and charterers in the oil, cruise, container, and bulk commodity sectors, as well as naval, government, and intergovernmental organisations. We have an ambitious growth model, and a career here is certainly going to be a rewarding one that will allow you to bring your skills & experience. We embrace change and are open to new thinking and pushing for positive change in our industry. Job Purpose: Plan and deliver business-critical product development. Ensure that the development team follows the Scrum framework and agile practices. Mentor and motivate the team to improve processes, facilitate meetings/ceremonies and decision-making processes. Eliminate team impediments. Duties and Responsibilities: Use agile methodology values, principles, and practices to plan, manage, and deliver solutions. Mentor and support the scrum team to follow agile values, principles, and practices. Collaborate with the Product Manager/Product Owner to build high-functioning development teams. Determine and manage tasks, issues, risks, and action items. In collaboration with the Product Manager/Product Owner, ensure there is a prioritized backlog of user stories. Facilitate a collaborative planning and estimating process. In collaboration with the Product Manager/Product Owner, prioritise, refine, and include L3 tickets in the sprint backlog. Schedule and facilitate scrum ceremonies (e.g., daily stand-ups, retros, etc.) and decision-making processes. Monitor progress and performance using appropriate metrics and help the team make improvements. Champion continuous improvement. Plan and organize demos. Ensure the proper use of collaborative processes and remove impediments for the scrum team. Collaborate with other Scrum Masters to understand and track dependencies between teams. Collaborate with other Scrum Masters to improve the agile practice and ways of working. Support the Product Manager/Product Owner by preparing and presenting status reports to stakeholders. Knowledge: Relevant degree in information technology or a related field. Ability to lead, influence, and work with people belonging to different cultures. Plan and manage product delivery. Knowledge of full-stack software engineering and modern delivery practices. Thorough knowledge of the entire software development life cycle, with experience in delivering critical products/projects. Knowledge of application integrations and APIs. Strong analytical and conceptual ability to function at both the detail and conceptual levels. Skills: Certified Scrum Master. Project management practices. Agile delivery practices. Excellent team management skills. Excellent communication skills. Strong collaborative and partner approach. Jira & Confluence. Must have come from a software development background using .NET technologies. Understanding of AWS cloud-native development using EC2, Lambda, S3, Simple Queue Service, etc. Experience with any middleware platform and database technologies will be an added advantage.
Experience: 5+ years of agile delivery management experience. Ability to manage scrum teams of experienced full-stack engineers. Management of teams that built and maintained business-critical products/applications. Exposure to business-critical systems integrations & RPA. Experience in handling DevOps & site reliability engineering teams will be an added advantage. Plus, much more! Why Inchcape Shipping Services? We believe in building a diverse and high-performing workforce that works together to provide our customers with the exceptional service they deserve. To reach the highest standards we depend on our people, their welfare, training, and expertise. We realise the value of our staff and know that your unique experiences, skills, and passions will help you to build a rich and rewarding career in our dynamic industry. Our values are at the centre of everything we do, and the successful candidate will be expected to demonstrate and fully adopt these: Global Perspective - we connect the world and see the bigger picture. The Power of People - we rely on the strength of local agent knowledge and relationships. Progress - we adopt new thinking and push for positive change in our industry. #WeAreInchcape Inchcape is an Equal Opportunities Employer - equality, diversity, and inclusion are at the heart of everything we do. Working in a diverse society, we recognise that our customers, colleagues, and contractors are central to our success. Additional Information: Appointment to this role will be subject to satisfactory references and possession of valid Right to Work documentation, depending upon your geographical location. To protect the interests of all parties, Inchcape will not accept unsolicited or speculative resumes from recruitment agencies and will not be responsible for any fees associated with them.
Posted 1 day ago
1.0 - 2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description At RBHU, we are committed to delivering data-driven solutions that empower decision-making. We work at the intersection of Data, AI, and Analytics, helping clients unlock the value of their data. We are now looking for an enthusiastic Data Engineer with hands-on Databricks experience to join our growing team. Key Responsibilities Develop, maintain, and optimize ETL/ELT pipelines using Databricks. Integrate data from multiple sources into scalable and reliable data solutions. Implement data transformations, cleansing, and enrichment processes. Work closely with data analysts, data scientists, and business teams to deliver analytics-ready datasets. Ensure data quality, performance, and security in all workflows. Automate workflows and optimize existing pipelines for efficiency. Maintain technical documentation for all data engineering processes. Required Skills & Qualifications 1-2 years of professional experience as a Data Engineer. Strong hands-on experience with Databricks (including PySpark/Spark SQL). Solid understanding of data modeling, data warehousing, and ETL processes. Good knowledge of SQL and performance tuning techniques. Exposure to cloud platforms (Azure / AWS / GCP) - preferably Azure Databricks. Basic understanding of version control tools like Git. Good problem-solving skills and ability to work in collaborative teams. Preferred Skills Experience with Delta Lake and Unity Catalog. Understanding of data governance and compliance requirements. Familiarity with workflow orchestration tools (Airflow, Azure Data Factory, etc.). Educational Qualification Bachelor's degree in Computer Science, Information Technology, or related field. (ref:hirist.tech)
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
We are seeking a seasoned Lead Data Engineer with 7+ years of experience, primarily focused on Python and writing complex SQL queries in PostgreSQL. The ideal candidate will have a strong background in Python scripting, with additional knowledge of AWS services such as Lambda, Step Functions, and other data engineering tools. Experience in integrating data into Salesforce is a plus. The role requires a skilled Lead Data Engineer with extensive experience in developing and managing complex data solutions using PostgreSQL and Python (a brief illustrative sketch follows this listing). The candidate should be able to work both as an individual contributor and as a team lead, delivering end-to-end data solutions. It is essential to troubleshoot and resolve data-related issues efficiently, ensure data security, and maintain compliance with relevant regulations. Additionally, the Lead Data Engineer should be capable of leading a small team (2-3 data engineers) and maintaining comprehensive documentation of data solutions, workflows, and processes. Proactively communicating with the customer and team to provide guidance aligned with both short-term and long-term customer needs is crucial. The ideal candidate should have experience working on at least 3 to 4 end-to-end data engineering projects and adhere to company policies, procedures, and relevant regulations in all duties. The candidate should have deep expertise in writing and optimizing complex SQL queries in PostgreSQL and proficiency in Python scripting for data manipulation and automation. Familiarity with AWS services like Lambda and Step Functions, knowledge of building semantic data layers between applications and backend databases, and experience in integrating data into Salesforce are valuable skills. Strong troubleshooting and debugging skills, the ability to build and test data processes that integrate with external applications using REST API and SOAP calls, knowledge of real-time data processing and integration techniques, and strong communication skills to interact with customers and team members effectively are also essential. Adhering to the Information Security Management policies and procedures is required.
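As a rough, hedged illustration of the PostgreSQL-plus-Python work described above: the query, table names, and endpoint URL below are hypothetical placeholders, and a real pipeline would add retries, logging, and secrets management.

```python
# Illustrative sketch only: run an analytical PostgreSQL query from Python and
# push the result to an external application over REST. All identifiers are
# hypothetical placeholders.
import psycopg2
import requests

MONTHLY_REVENUE_SQL = """
    SELECT c.region,
           date_trunc('month', o.created_at)            AS month,
           SUM(o.amount)                                 AS revenue,
           RANK() OVER (PARTITION BY c.region
                        ORDER BY SUM(o.amount) DESC)     AS region_rank
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.status = 'COMPLETE'
    GROUP BY c.region, date_trunc('month', o.created_at)
"""

def run_pipeline(dsn: str, api_url: str) -> None:
    # Extract and aggregate inside PostgreSQL, where the heavy lifting is cheapest
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(MONTHLY_REVENUE_SQL)
        rows = cur.fetchall()

    # Shape the rows and hand them to a downstream REST consumer
    payload = [
        {"region": region, "month": month.isoformat(),
         "revenue": float(revenue), "rank": rank}
        for region, month, revenue, rank in rows
    ]
    response = requests.post(api_url, json=payload, timeout=30)
    response.raise_for_status()
```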
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Delhi, India
On-site
We are seeking a skilled Backend Developer with experience in AI tools and data management to join our team and build robust, scalable, and intelligent backend systems. You will be responsible for designing, developing, and maintaining APIs, data pipelines, and AI-driven services.
Responsibilities
API Development : Design, develop, and maintain RESTful and/or GraphQL APIs for seamless data exchange. AI Integration : Integrate AI/ML models and services into backend systems to enhance application functionality (a brief illustrative sketch follows this listing). System Architecture : Design and implement scalable and reliable backend architectures. Performance Optimization : Identify and resolve performance bottlenecks to ensure optimal system performance. Code Quality : Write clean, well-structured, and maintainable code adhering to best practices. Database Management : Design and optimize database schemas and queries. Cloud Services : Utilize cloud platforms (e.g., AWS, GCP, Azure) for deployment and management of backend services. Collaboration : Work closely with data scientists, front-end developers, and product managers to deliver integrated solutions.
Requirements
4-6 years of backend development experience, with a strong understanding of microservices architecture. B.Tech/M.Tech/MS in Computer Science or related field. Proficiency in Go or Java. Experience with relational/NoSQL databases, data pipelines, and Git. Proficiency in containerization (Docker, Kubernetes) and caching mechanisms (Redis/Memcached). Basic Python knowledge. Strong problem-solving and communication skills.
Preferred Qualifications
Experience with AI/ML frameworks (e.g., Langchain, LangGraph, TensorFlow), cloud-based AI services, and message queues (e.g., Kafka, RabbitMQ). Knowledge of data warehousing/ETL. Experience with performance monitoring tools and serverless architectures.
Company Profile
Inshorts Group is a leading tech company in the short-form content space. Our innovative platforms Inshorts and Public have been downloaded by more than 300 million users. Inshorts, our flagship product, is India's highest-rated and #1 short news app, serving over 12 million active users in India with concise 60-word shorts tailored to smartphone users who want to stay updated on news quickly. Public, our second platform, is the largest platform for hyperlocal content in India, with 70 million active users in India, providing timely updates and information relevant to users' towns and cities. We also provide cutting-edge and bespoke advertisement solutions for brands. Brands continue to trust us year after year owing to the multiple innovative award-winning campaigns we have delivered for them across sectors and seasons. This opportunity is for Inshorts Pte Ltd. (ref:hirist.tech)
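Purely as a hedged sketch of the "AI Integration" responsibility above (the listing itself targets Go or Java; Python is used here only to keep the examples in this document consistent): a tiny REST endpoint that fronts a model call with Redis caching. The route, key scheme, and `call_model` stub are hypothetical, not details from the posting.

```python
# Illustrative sketch only: cache AI responses behind a REST endpoint.
# The model call is a stub; host/port and route names are placeholders.
import hashlib

import redis
from flask import Flask, jsonify, request

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def call_model(prompt: str) -> str:
    # Placeholder for a real inference call (e.g., an internal model service)
    return f"summary of: {prompt[:40]}"

@app.post("/v1/summarize")
def summarize():
    prompt = request.get_json(force=True).get("text", "")
    key = "summarize:" + hashlib.sha256(prompt.encode()).hexdigest()

    cached = cache.get(key)             # serve repeat prompts straight from Redis
    if cached is not None:
        return jsonify({"summary": cached, "cached": True})

    summary = call_model(prompt)
    cache.setex(key, 3600, summary)     # keep the answer for one hour
    return jsonify({"summary": summary, "cached": False})
```

In Go or Java the shape is the same: hash the request, check the cache, call the model only on a miss, and write the result back with a TTL.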
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning; bold ideas; courage and passion to drive life-changing impact to ZS. At ZS, we honor the visible and invisible elements of our identities, personal experiences, and belief systems - the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
**What You'll Do:** Lead end-to-end projects using cloud technologies to solve complex business problems. Provide technology expertise to maximize value for clients and project teams. Drive a strong delivery methodology to ensure projects are delivered on time, within budget, and to clients' satisfaction. Ensure technology solutions are scalable, resilient, and optimized for performance and cost. Guide, coach, and mentor project team members for continuous learning and professional growth. Demonstrate expertise, facilitation, and strong interpersonal skills in internal and client interactions. Collaborate with ZS experts to drive innovation and minimize project risks. Work globally with team members to ensure a smooth project delivery. Bring structure to unstructured work for developing business cases with clients. Assist ZS Leadership with business case development, innovation, thought leadership, and team initiatives.
**What You'll Bring:** Candidates must either be in their junior year of a Bachelor's degree or in their first year of a Master's degree specializing in Business Analytics, Computer Science, MIS, MBA, or a related field with academic excellence. 5+ years of consulting experience in leading large-scale technology implementations. Strong communication skills to convey technical concepts to diverse audiences. Significant supervisory, coaching, and hands-on project management skills. Extensive experience with major cloud platforms like AWS, Azure, and GCP. Deep knowledge of enterprise data management, advanced analytics, process automation, and application development. Familiarity with industry-standard products and platforms such as Snowflake, Databricks, Redshift, Salesforce, Power BI, Cloud. Experience in delivering projects using agile methodologies.
**Additional Skills:** Capable of managing a virtual global team for the timely delivery of multiple projects. Experienced in analyzing and troubleshooting interactions between databases, operating systems, and applications. Travel to global offices as required to collaborate with clients and internal project teams.
**Perks & Benefits:** ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development.
Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
**Travel:** Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
**Considering Applying?** At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.
**To Complete Your Application:** Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At: www.zs.com
Posted 1 day ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
About The Role
We are seeking a skilled Data Engineer ETL to design, develop, and maintain robust ETL pipelines that enable seamless data integration and processing across multiple data sources. You will play a critical role in transforming raw data into actionable insights by ensuring efficient, scalable, and reliable data flows to support analytics, reporting, and machine learning initiatives.
Key Responsibilities
Design, build, and optimize scalable ETL pipelines to extract, transform, and load data from various structured and unstructured data sources. Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable data solutions. Develop data workflows and automation using ETL tools (e.g., Apache NiFi, Talend, Informatica) or custom scripts (Python, SQL, Shell); a minimal orchestration sketch follows this listing. Monitor and troubleshoot ETL jobs to ensure high data quality and timely delivery. Implement data validation, error handling, and logging to ensure data accuracy and integrity. Optimize data storage and retrieval through database tuning, partitioning, and indexing strategies. Collaborate with cloud engineers and infrastructure teams to manage data pipelines in cloud environments (AWS, Azure, GCP). Document ETL processes, data lineage, and architecture to maintain knowledge sharing and compliance. Stay current with new data engineering technologies and best practices to improve system performance and reliability.
Qualifications
Bachelor's degree in Computer Science, Information Systems, or related field. 3+ years of experience in data engineering with a focus on ETL development. Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL, Oracle). Hands-on experience with ETL tools and frameworks (Apache Airflow, Talend, Informatica, AWS Glue, etc.). Strong programming skills in Python, Java, or Scala. Familiarity with big data technologies such as Hadoop, Spark, and Kafka is a plus. Experience working with cloud data platforms (AWS Redshift, Google BigQuery, Azure Synapse). Knowledge of data warehousing concepts and dimensional modeling. Strong analytical and problem-solving skills. Ability to work collaboratively in an agile team environment. (ref:hirist.tech)
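A minimal, hedged sketch of the workflow orchestration mentioned above, using Apache Airflow (one of the frameworks named in the qualifications). The DAG id, schedule, and task bodies are hypothetical placeholders; real tasks would call the actual extract/transform/load logic.

```python
# Illustrative Airflow 2.x (2.4+) DAG sketch; all names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull increments from the source system into staging")

def transform(**_):
    print("cleanse, validate, and enrich the staged records")

def load(**_):
    print("write curated tables into the warehouse")

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency: extract, then transform, then load
    t_extract >> t_transform >> t_load
```

Monitoring, data-quality checks, and alerting (the other responsibilities above) would typically hang off the same DAG via sensors, SLAs, and on-failure callbacks.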
Posted 1 day ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Lead Software Engineer (Backend) - Credit Cards Platform
What You'll Do
Perform complex application programming activities with an emphasis on mobile development : Angular, Node, TypeScript, JavaScript, RESTful APIs and more. Lead the definition of system architecture and detailed solution design that are scalable and extensible. Collaborate with Product Owners, Designers, and other engineers on different permutations to find the best solution possible. Own the quality of code and do your own testing. Automate feature testing and contribute to the UI testing framework. Become a subject matter expert for our mobile applications backend and middleware. Deliver amazing solutions to production that knock everyone's socks off. Mentor junior developers on the team.
What We're Looking For
Amazing technical instincts. You know how to evaluate and choose the right technology and approach for the job. You have stories you could share about what problem you thought you were solving at first, but through testing and iteration, came to solve a much bigger and better problem that resulted in positive outcomes all-around. A love for learning. Technology is continually evolving around us, and you want to keep up to date to ensure we are using the right tech at the right time. A love for working in ambiguity - and making sense of it. You can take in a lot of disparate information and find common themes, recommend clear paths forward and iterate along the way. You don't form an opinion and sell it as if it's gospel; this is all about being flexible, agile, dependable, and responsive in the face of many moving parts. Confidence, not ego. You have an ability to collaborate with others and see all sides of the coin to come to the best solution for everyone. Flexible and willing to accept change in priorities, as necessary. Demonstrable passion for technology (e.g., personal projects, open-source involvement). Enthusiastic embrace of DevOps culture and collaborative software engineering. Ability and desire to work in a dynamic, fast paced, and agile team environment. Enthusiasm for cloud computing platforms such as AWS or Azure.
Basic Qualifications
Minimum Bachelor's or Master's degree in Computer Science or a related discipline from an accredited college or university. At least 6 years of experience designing, developing, and delivering backend applications with Node.js, TypeScript, JavaScript, RESTful APIs and related backend frameworks. At least 2 years of experience building internet-facing services. At least 2 years of experience with AWS and/or OpenShift. Proficient in the following concepts : object-oriented programming, software engineering techniques, quality engineering, parallel programming, databases, etc. Proficient in building and consuming RESTful APIs. Proficient in managing multiple tasks and consistently meeting established timelines. Experience integrating APIs with front-end and/or mobile-specific frameworks. Strong collaboration skills. Excellent written and verbal communications skills.
Preferred Qualifications
Experience with the Apache Cordova framework. Demonstrable native coding background in iOS and Android. Experience developing and deploying applications within Kubernetes-based containers. Experience in Agile and SCRUM development techniques. (ref:hirist.tech)
Posted 1 day ago