
9211 Logging Jobs - Page 33


4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

YipitData is the leading market research and analytics firm for the disruptive economy, having recently raised up to $475M from The Carlyle Group at a valuation over $1B. For three years and counting, YipitData has been recognized as one of Inc's Best Workplaces. A fast-growing technology company with offices in NYC, Austin, Miami, and other locations, we cultivate a people-centric culture focused on mastery, ownership, and transparency.

As a Web Crawling Specialist (official internal title: Data Solutions Engineer) at YipitData, you will play a pivotal role in designing, refactoring, and maintaining the web scrapers that power critical reports across the organization. Reporting directly to the Data Solutions Engineering Manager, you will ensure that data ingestion processes are resilient, efficient, and scalable, directly supporting multiple business units and products.

In this role, you will:
- Overhaul existing scraping scripts to improve reliability, maintainability, and efficiency, applying best coding practices to ensure quality and sustainability.
- Use sophisticated fingerprinting methods to avoid detection and blocking; handle dynamic content, navigate complex DOM structures, and manage session/cookie lifecycles effectively.
- Collaborate with cross-functional teams, working closely with analysts and stakeholders to gather requirements, align on targets, and ensure data quality.
- Support internal users of web scraping tooling with troubleshooting, documentation, and best practices for efficient data usage in critical reporting.
- Develop monitoring solutions and alerting frameworks to identify and address failures; evaluate scraper performance and diagnose bottlenecks and scaling issues.
- Propose new tooling, methodologies, and technologies to enhance scraping capabilities and processes, staying up to date with industry trends and evolving bot-detection tactics.

This fully remote opportunity based in India offers standard work hours of 11am to 8pm IST with flexibility. Keys to succeeding in this role: effective communication in English; 4+ years of experience with web scraping frameworks; a strong understanding of HTTP, RESTful APIs, HTML parsing, browser rendering, and TLS/SSL mechanics; expertise in advanced fingerprinting and evasion strategies; and strong troubleshooting skills.

At YipitData, we offer a comprehensive compensation package, including benefits, perks, and a competitive salary. We prioritize your personal life with offerings such as vacation time, parental leave, team events, and learning reimbursement. Your growth at YipitData is determined by the impact you make, in an environment built on ownership, respect, and trust.
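The fingerprinting duties described above come down to presenting coherent browser identities between requests. A minimal sketch in Python (the profile data, class name, and header values are illustrative, not YipitData's tooling): mismatched headers, such as a Chrome User-Agent with no Accept-Language, are an easy detection signal, so profiles are rotated as a unit.

```python
import itertools

# Illustrative "fingerprint" profiles: each pairs a User-Agent with a
# matching Accept-Language, rotated together so headers stay coherent.
PROFILES = [
    {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36",
     "Accept-Language": "en-US,en;q=0.9"},
    {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15",
     "Accept-Language": "en-GB,en;q=0.8"},
]

class HeaderRotator:
    """Cycle through coherent header profiles between requests."""
    def __init__(self, profiles):
        self._cycle = itertools.cycle(profiles)

    def next_headers(self, extra=None):
        headers = dict(next(self._cycle))  # copy so callers cannot mutate the profile
        if extra:
            headers.update(extra)
        return headers

rotator = HeaderRotator(PROFILES)
h1 = rotator.next_headers()
h2 = rotator.next_headers({"Referer": "https://example.com/"})
```

In a real scraper these headers would be attached to a session object that also persists cookies across the session lifecycle.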

Posted 6 days ago


7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad | Work Model: Hybrid (3 days from office) | Experience Required: 7+ years | Role Type: Individual Contributor | 6-month contract position

Role Summary
We are seeking a QA Automation Engineer with 7+ years of hands-on experience in automation testing using Selenium WebDriver (Java), backend validation via SQL, and API testing through Postman. The ideal candidate will contribute to enterprise-scale automation suites, validate data across services, and collaborate in Agile delivery teams. This role emphasizes deep technical ability in writing, debugging, and managing test scripts, with a structured approach to test design and defect triaging. Candidates must demonstrate working knowledge of test frameworks like JUnit/TestNG, dependency tools like Maven, and collaboration platforms such as Git and JIRA.

Must-Have Skills (with Required Depth)
- Selenium WebDriver (Java): Must have independently designed and implemented automation test cases for complex, dynamic UIs, including building reusable page-object components, implementing synchronization strategies using explicit waits, and handling DOM-level exceptions.
- SQL (Backend Validation): Proficient in writing mid-to-complex queries using joins, aggregations, and subqueries to validate multi-table relationships. Must be able to debug data mismatches directly against Oracle/MySQL/SQL Server.
- API Testing (Postman): Has performed REST API validations using Postman: setting headers/auth tokens, validating status codes, and asserting payload structures (JSON/XML). Full automation of API suites is not required.
- JUnit / TestNG: Must have independently managed test execution using annotations (@BeforeClass, @DataProvider), defined test groups, configured retries, and asserted results across functional modules.
- Maven / Gradle: Capable of managing automation test suites via Maven, including configuring dependencies in pom.xml, executing test lifecycles (mvn test), and interpreting console output.
- BDD (Cucumber): Has authored Gherkin-based feature files and collaborated with business analysts on scenario design. Step-definition coding is not mandatory, but knowledge of how feature files plug into test execution is required.
- JIRA: Proficient in documenting test cases, logging bugs, linking defects to epics/stories, and updating Agile boards.
- Git / GitHub: Able to manage code via Git: branch creation, rebasing, conflict resolution, and pull requests. Expected to demonstrate fluency in working with shared repositories.
- Agile/Scrum: Has worked within structured sprints, participated in ceremonies (stand-ups, retros, grooming), and contributed toward QA sprint goals independently.

Nice-to-Have Skills
- REST Assured (Java): Familiarity with automating API calls using REST Assured is a plus: configuring the base URI, handling authentication tokens, and parsing JSON response data. Not mandatory if Postman is well understood.
- Step Definitions (BDD): Prior experience writing Java-based step definitions using Cucumber-JUnit integration is desirable but not mandatory.
- CI/CD (Jenkins, GitHub Actions): Aware of triggering builds, configuring jobs to run automated tests, and interpreting build logs. Ownership of pipeline setup is not required.
- Test Reporting (ExtentReports, Allure): Exposure to integrating test reports into frameworks and customizing test logs into HTML/dashboard outputs is preferred.
- Cross-Browser Testing: Understands browser compatibility strategies. Experience running tests via Selenium Grid or services like BrowserStack is a bonus.
- Database Connectivity (JDBC): Basic understanding of establishing JDBC connections to query data from within test automation scripts. Not a required component for this role.

Posted 6 days ago


11.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), is dedicated to creating pharmacy, care, and benefits solutions to enhance health and vitality. The organization focuses on relentless innovation to improve the prediction, prevention, and treatment of illnesses and diseases, making these solutions more accessible to millions of people.

Information Protection Manager

Position Summary: As an Information Protection Manager, you will lead a team of 5-10 cybersecurity experts responsible for conducting application and infrastructure security assessments. Your primary objective will be to ensure the confidentiality, integrity, and availability of systems across technology platforms such as Mainframe, Mac, Windows, Linux, and Cloud (AWS, Azure, Google). You will also collaborate with IT and business partners to address security issues identified through security evaluations and secure-scanning reports.

Job Description & Responsibilities: Join us at an exciting time as we enhance our security program to align with the needs of an Agile IT workforce and strengthen Cigna's security posture. This role demands strong leadership, people management, teamwork, and technical skills. Key responsibilities include:
- Assessing the design and implementation of cybersecurity controls in line with Cigna's Policies, Standards, and Baselines.
- Overseeing security evaluations to ensure the security and compliance of technology assets.
- Conducting risk assessments of existing and new services and technologies, identifying design gaps and risks, and recommending security enhancements.
- Acting as an information security expert and advisor to IT and business partners for informed risk management decisions.
- Identifying opportunities to improve risk posture, proposing solutions to mitigate risks, and assessing residual risk.
- Building strong relationships with individuals and groups involved in managing information risks.
- Staying updated on current and emerging security threats and designing security architectures to counter them.
- Collaborating with the enterprise to evaluate security solutions aligned with business and technology needs.
- Communicating risk assessment findings to information security and business partners.
- Developing strong relationships with IT leaders and driving improvements within the function through automation, process enhancements, and other initiatives that improve customer experience and function effectiveness.
- Effectively communicating project status to senior management and contributing to talent attraction, retention, and development.

Experience Required:
- Eleven to thirteen years of relevant work experience.
- Previous management experience.
- Comprehensive understanding of Application, Infrastructure, Network, and Cloud security.

Experience Desired:
- Experience in the Health Insurance or Health Care industry.

Education and Training Required:
- BS or MA in Computer Science, Information Security, or a related field, or equivalent work experience.
- Relevant certifications such as CISSP, CISA, CCSP, CISM, CRISC, Security+, or Network+.

Primary Skills:
- Proficiency in information security management frameworks and regulatory compliance.
- Strong communication skills to articulate cyber risks, controls, and solutions effectively.
- Focus on continuous improvement and challenging the status quo.
- Ability to assess risks and communicate findings to drive objective decisions.

Working knowledge or understanding of: Physical and Virtual Infrastructure, Network Security, Cloud Computing, Containerization, Infrastructure as Code, Microservices, Mobile Security, Encryption and Key Management, Multifactor Authentication, and the Secure Software Development Lifecycle.

Posted 6 days ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing...
As a Senior Manager - Cloud Security in Verizon Cyber Services, you’ll be responsible for ensuring that the data and processes used in public cloud platforms are secured and controlled so that application workloads in those platforms are not exposed to unintended users or services. You will also partner with multiple stakeholders in framing and implementing security control frameworks for the AWS, GCP, Azure, and OCI cloud platforms. This position requires a highly motivated individual with a solid technical and analytics background and leadership capabilities, interfacing across the organization and cross-functional teams to deliver programs.

What We’re Looking For...
- Driving cloud security strategy planning and execution: strategize, plan, and implement the short-term and long-term goals of building a robust, repeat-at-scale, efficient model for the multi-cloud platform, encompassing a standard north-star architecture across domains/systems, with SRE, DevSecOps, security, stability, scalability, and availability tools and processes in place. Enable automation and proactive monitoring to detect issues and communicate to the business proactively.
- Partnering with the product and business teams to align on priorities and onboard their cloud infrastructure into CISO security controls.
- Owning delivery of security controls on the AWS, GCP, Azure, and OCI cloud platforms across GTS and product teams.
- Communicating effectively with our customers to help them understand security issues and solutions, as well as continuous delivery and cloud concepts.
- Working closely with portfolio and product teams to build security, reliability, and scalability into the development lifecycle.
- Leading a team that builds security automation tools to streamline and scale applications in the production environment, and troubleshooting and resolving issues related to security compliance, deployment, and operations.
- Building reliable infrastructure services in security tooling to deliver highly available and scalable services, using native cloud infrastructure services such as EC2, EBS, Auto Scaling, and CloudWatch.
- Continuously looking to automate and operationalize manual and repetitive tasks.
- Architecting, designing, and helping the team automate cloud security controls and monitoring solutions.
- Providing leadership with advanced capabilities to enable automation/integration across hybrid processing environments (LDAP, SSO, CI/CD, cloud APIs, messaging, web, microservices, SaaS, ServiceNow, networking).
- Conducting POCs on services from security and risk standpoints and creating an access management framework based on the principle of least privilege.
- Working with stakeholders from applications as well as other cloud core teams to provide solutions that meet security and governance requirements while minimizing impact on developer productivity.
- Designing proactive monitoring, logging, audits, and automated policy enforcement for security and cost compliance.
- Leading US and VZI development teams in a global delivery model to plan and deliver projects with aggressive deadlines.
- Providing technical leadership and subject matter expertise on large, highly complex projects.
- Evaluating, developing, and implementing scalable solutions to deliver business requirements.
- Enabling a best-in-class developer ecosystem with the needed access, data, stakeholder partnerships, work-life balance, and long-term employee career paths.
- Creating a people and tech leadership pipeline: nurturing the development of talent into strategic roles in both technology and strategic management.
- Driving a culture of innovation: championing innovation by example, and encouraging the team to participate in hackathons, coding events, and other org-wide events and efforts.
- Motivating and training direct reports to maximize productivity; coaching and mentoring team members to achieve assigned goals and objectives.

You’ll Need To Have:
- Bachelor’s degree or six or more years of work experience.
- Eight or more years of relevant work experience.
- Experience managing large-scale cloud security infrastructure projects from scratch, handling requirements, design, and delivery.
- Strong people leadership and mentoring, maintaining a very high level of engagement with team members.
- Strong verbal and written communication skills.
- Experience with multi-cloud platform infrastructures in Azure, AWS, GCP, and/or OCI.
- Experience with cloud security and governance practices and frameworks.
- Experience with modern source control repositories (e.g., Git) and DevOps toolsets (Jenkins, Ansible, etc.), and knowledge of Agile/Scrum methodologies.

Even better if you have one or more of the following:
- Experience in technology leadership, architecture, and Agile methodologies.
- Communication and stakeholder management skills.
- Problem-solving skills to develop quick yet sound solutions to complex issues.
- Experience with logging platforms in cloud infrastructure.
- Experience driving cloud security automation and delivering high-performing, scalable applications.
- Experience with DevOps CI/CD processes to automate builds and deployments.
- Experience mentoring and coaching diverse teams.
- Experience identifying and analyzing errors and finding their RCA, working with cross-functional teams such as SRE and backend teams.
- Experience with analytical tools and databases such as Postgres, Looker, ELK, and ETL tools.
- Recognizing, tracking, and communicating issues, accomplishments, and milestones to the team and business partners.
- Good presentation and communication skills.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above.

Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
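The least-privilege reviews mentioned above can be sketched as a simple policy audit. The policy shape below mimics an IAM-style JSON document, and the checker logic is purely illustrative, not any vendor's tool:

```python
def find_overbroad_statements(policy):
    """Return Allow statements that grant wildcard actions or resources.

    `policy` is an IAM-style document:
    {"Statement": [{"Effect": ..., "Action": ..., "Resource": ...}, ...]}
    """
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        # "*" or "service:*" actions, or a bare "*" resource, violate least privilege
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

policy = {"Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
]}
print(len(find_overbroad_statements(policy)))  # 1
```

A check like this is the kind of automated policy enforcement that would run in CI before any access change is deployed.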

Posted 6 days ago


5.0 years

3 - 9 Lacs

Delhi, Delhi

On-site

Job Title: Telecom Development Engineer – FreeSWITCH & Kazoo
Department: Engineering / VoIP Platform
Location: On-site, Delhi
Employment Type: Full-time
Experience Level: 5+ years in VoIP/Telecom Development

Role Summary: We are seeking a highly skilled Telecom Development Engineer with hands-on experience in FreeSWITCH and Kazoo, strong programming skills in Go and Python, and familiarity with cloud databases, RabbitMQ, REST APIs, Ansible, Prometheus, Grafana, and Git. The ideal candidate will be responsible for developing and maintaining VoIP applications and modules in FreeSWITCH and integrating them into the Kazoo multi-tenant telephony platform using Monster UI.

Key Responsibilities:
- Design and develop custom FreeSWITCH modules: create scalable, high-performance modules and dialplans in FreeSWITCH using Lua, Go, or C; work with ESL (Event Socket Library) and mod_xml_curl to extend call-handling logic.
- Kazoo integration and configuration: deploy FreeSWITCH modules and services into Kazoo via Monster UI and the Kazoo APIs; customize and extend Kazoo applications using Kazoo’s AMQP and REST API interfaces.
- Application development: build automation tools and microservices using Go and Python to manage telecom workflows; develop backend services that interface with SIP, RTP, and Kazoo/FreeSWITCH subsystems.
- Infrastructure automation and monitoring: automate deployments with Ansible; monitor system health using Prometheus and Grafana; implement scalable logging, alerting, and system health checks.
- DevOps and source control: use Git for version control and CI/CD workflows; collaborate on code reviews and participate in agile sprints.
- API integration: consume and expose RESTful APIs to support user interface functionality and backend logic; integrate with third-party systems and internal services using RabbitMQ message queues.
- Troubleshooting and optimization: investigate and resolve SIP signaling issues, one-way audio, NAT traversal, and codec mismatches; optimize RTP stream handling, failover, load balancing, and call quality.

Required Skills & Qualifications:
- VoIP expertise: deep understanding of SIP, RTP, SDP, NAT, and SIP tracing tools (e.g., sngrep, Wireshark); experience building and maintaining VoIP platforms using FreeSWITCH and Kazoo.
- Programming languages: proficiency in Go (Golang) and Python; familiarity with Lua scripting and C for FreeSWITCH module development.
- Messaging and databases: experience with RabbitMQ (AMQP) and cloud databases such as CouchDB/Couchbase (used by Kazoo).
- Infrastructure tools: strong skills in Ansible, Git, and CI/CD pipelines; proficiency with Prometheus and Grafana for system observability.
- Web and API skills: proficiency in designing and consuming RESTful APIs; experience with the Kazoo REST APIs and Monster UI for provisioning and monitoring.

Preferred Qualifications:
- Experience working on multi-tenant VoIP platforms.
- Familiarity with WebRTC, STUN/TURN, and SBCs (Session Border Controllers).
- Previous contributions to open-source VoIP projects.
- Knowledge of Docker or containerization for telecom applications.

Key Attributes: strong problem-solving skills and the ability to work independently; excellent communication and documentation skills; a passion for scalable systems, performance optimization, and clean architecture; a collaborative, proactive mindset.

Job Types: Full-time, Permanent
Pay: ₹311,015.97 - ₹900,000.00 per year
Benefits: cell phone reimbursement, internet reimbursement, paid time off
Work Location: In person
Expected Start Date: 11/08/2025
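FreeSWITCH's Event Socket delivers events as blocks of `Header: value` lines, so a backend service extending call-handling logic via ESL typically starts with an event parser. A minimal sketch (real services would use an ESL library and also handle URL-encoded header values; the UUID here is hypothetical):

```python
def parse_esl_event(raw):
    """Parse a plain-format Event Socket event (one 'Header: value' per line) into a dict."""
    event = {}
    for line in raw.strip().splitlines():
        if not line.strip():
            break  # a blank line ends the header block
        key, _, value = line.partition(": ")
        event[key] = value
    return event

raw_event = (
    "Event-Name: CHANNEL_ANSWER\n"
    "Core-UUID: 9d1a2f3e-hypothetical\n"
    "Caller-Caller-ID-Number: 1001\n"
)
evt = parse_esl_event(raw_event)
print(evt["Event-Name"])  # CHANNEL_ANSWER
```

From a dict like this, a service can route on Event-Name (e.g., trigger billing on CHANNEL_ANSWER) or correlate legs of a call by UUID.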

Posted 6 days ago


0 years

0 Lacs

Hyderabad, Telangana, India

Remote

When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing...
As a Senior Manager - Cloud Security in Verizon Cyber Services, you’ll be responsible for ensuring that the data and processes used in public cloud platforms are secured and controlled so that application workloads in those platforms are not exposed to unintended users or services. You will also partner with multiple stakeholders in framing and implementing security control frameworks for the AWS, GCP, Azure, and OCI cloud platforms. This position requires a highly motivated individual with a solid technical and analytics background and leadership capabilities, interfacing across the organization and cross-functional teams to deliver programs.

What We’re Looking For...
- Driving cloud security strategy planning and execution: strategize, plan, and implement the short-term and long-term goals of building a robust, repeat-at-scale, efficient model for the multi-cloud platform, encompassing a standard north-star architecture across domains/systems, with SRE, DevSecOps, security, stability, scalability, and availability tools and processes in place. Enable automation and proactive monitoring to detect issues and communicate to the business proactively.
- Partnering with the product and business teams to align on priorities and onboard their cloud infrastructure into CISO security controls.
- Owning delivery of security controls on the AWS, GCP, Azure, and OCI cloud platforms across GTS and product teams.
- Communicating effectively with our customers to help them understand security issues and solutions, as well as continuous delivery and cloud concepts.
- Working closely with portfolio and product teams to build security, reliability, and scalability into the development lifecycle.
- Leading a team that builds security automation tools to streamline and scale applications in the production environment, and troubleshooting and resolving issues related to security compliance, deployment, and operations.
- Building reliable infrastructure services in security tooling to deliver highly available and scalable services, using native cloud infrastructure services such as EC2, EBS, Auto Scaling, and CloudWatch.
- Continuously looking to automate and operationalize manual and repetitive tasks.
- Architecting, designing, and helping the team automate cloud security controls and monitoring solutions.
- Providing leadership with advanced capabilities to enable automation/integration across hybrid processing environments (LDAP, SSO, CI/CD, cloud APIs, messaging, web, microservices, SaaS, ServiceNow, networking).
- Conducting POCs on services from security and risk standpoints and creating an access management framework based on the principle of least privilege.
- Working with stakeholders from applications as well as other cloud core teams to provide solutions that meet security and governance requirements while minimizing impact on developer productivity.
- Designing proactive monitoring, logging, audits, and automated policy enforcement for security and cost compliance.
- Leading US and VZI development teams in a global delivery model to plan and deliver projects with aggressive deadlines.
- Providing technical leadership and subject matter expertise on large, highly complex projects.
- Evaluating, developing, and implementing scalable solutions to deliver business requirements.
- Enabling a best-in-class developer ecosystem with the needed access, data, stakeholder partnerships, work-life balance, and long-term employee career paths.
- Creating a people and tech leadership pipeline: nurturing the development of talent into strategic roles in both technology and strategic management.
- Driving a culture of innovation: championing innovation by example, and encouraging the team to participate in hackathons, coding events, and other org-wide events and efforts.
- Motivating and training direct reports to maximize productivity; coaching and mentoring team members to achieve assigned goals and objectives.

You’ll Need To Have:
- Bachelor’s degree or six or more years of work experience.
- Eight or more years of relevant work experience.
- Experience managing large-scale cloud security infrastructure projects from scratch, handling requirements, design, and delivery.
- Strong people leadership and mentoring, maintaining a very high level of engagement with team members.
- Strong verbal and written communication skills.
- Experience with multi-cloud platform infrastructures in Azure, AWS, GCP, and/or OCI.
- Experience with cloud security and governance practices and frameworks.
- Experience with modern source control repositories (e.g., Git) and DevOps toolsets (Jenkins, Ansible, etc.), and knowledge of Agile/Scrum methodologies.

Even better if you have one or more of the following:
- Experience in technology leadership, architecture, and Agile methodologies.
- Communication and stakeholder management skills.
- Problem-solving skills to develop quick yet sound solutions to complex issues.
- Experience with logging platforms in cloud infrastructure.
- Experience driving cloud security automation and delivering high-performing, scalable applications.
- Experience with DevOps CI/CD processes to automate builds and deployments.
- Experience mentoring and coaching diverse teams.
- Experience identifying and analyzing errors and finding their RCA, working with cross-functional teams such as SRE and backend teams.
- Experience with analytical tools and databases such as Postgres, Looker, ELK, and ETL tools.
- Recognizing, tracking, and communicating issues, accomplishments, and milestones to the team and business partners.
- Good presentation and communication skills.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above.

Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 6 days ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad | Work Model: Hybrid (3 days from office) | Experience Required: 5+ years | 6-month contract position

Role Summary
We are hiring an experienced QA Automation Engineer for our client — a leading USA-based global bank. This is an Individual Contributor (IC) role based in Hyderabad with a hybrid work arrangement. You will be embedded within agile squads and play a critical role in validating UI flows, API integrations, and backend data consistency through a mix of manual and automation testing. The ideal candidate will have strong fundamentals in Selenium with Java, practical experience writing or adapting automation scripts, and foundational knowledge of databases and API validation.

Must-Have Skills
- Selenium with Java (UI): Able to create and run basic Selenium test cases using Java. Comfortable working with element locators (XPath, CSS), waits (explicit/implicit), assertions, reusable methods, and PageFactory/Page Object Model patterns. Full framework ownership is not required.
- Cucumber (BDD): Able to work with Gherkin syntax (Given/When/Then) and write or modify basic step definitions in Java. Should have used Cucumber in at least one project. Knowledge of tag filters or advanced hooks is not required.
- Postman (API Testing): Able to send REST API requests and validate response bodies, headers, and status codes. Understands basic parameterization and JSON response inspection. Scripting or Postman tests are optional.
- SQL / DB Testing: Able to run SELECT queries with WHERE clauses to validate database values. Familiarity with basic JOINs is preferred but not mandatory. Advanced query tuning or DB procedures are not expected.
- Agile Methodology: Has participated in sprints and agile ceremonies such as stand-ups and retrospectives. Understands user stories, estimation basics, and the defect lifecycle in an iterative context.
- Jira: Able to log bugs, update user stories, and track ticket status using Jira. Understanding of workflows or advanced filters is not required.
- SDLC/STLC Knowledge: Understands phases like test planning, test case creation, execution, and defect logging. Familiarity with regression planning and release cycles is expected.
- Communication Skills: Able to clearly describe bugs, test coverage, and task updates in internal meetings or stand-ups. Client presentations or documentation ownership are not expected.

Nice-to-Have Skills
- Karate (API Automation): Awareness of Karate DSL syntax and structure. Experience with Postman/REST Assured is accepted as a substitute. Full test suite ownership is not required.
- TestNG / JUnit: Familiarity with organizing test cases using annotations like @Test and @BeforeClass; able to run tests via an IDE or build tools. Advanced grouping or parallel execution is not required.
- CI/CD Integration (Jenkins / GitHub Actions): Exposure to automated test runs triggered via Jenkins or GitHub Actions. The ability to modify or create CI jobs is not necessary.
- Maven / Gradle: Knows how to run automated test scripts using mvn test or Gradle from the terminal or within IDEs like IntelliJ/Eclipse. Dependency configuration is not required.
- Git: Able to pull the latest test code, commit, and push. Merge conflict handling or rebase familiarity is optional.
- Cloud Testing Concepts: General awareness of testing applications hosted in cloud environments like AWS or Azure. Environment provisioning or SaaS-layer testing is not required.
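The API checks described above (status code, content type, payload structure) boil down to a few assertions. This standalone sketch validates a canned response rather than a live endpoint, and the endpoint and field names are hypothetical:

```python
def validate_response(status, headers, body, expected_status=200, required_fields=()):
    """Collect validation failures the way a Postman test tab would assert them."""
    errors = []
    if status != expected_status:
        errors.append(f"expected status {expected_status}, got {status}")
    if "application/json" not in headers.get("Content-Type", ""):
        errors.append("response is not JSON")
    for field in required_fields:
        if field not in body:
            errors.append(f"missing field: {field}")
    return errors

# Canned response standing in for a hypothetical GET /accounts/42
errors = validate_response(
    status=200,
    headers={"Content-Type": "application/json; charset=utf-8"},
    body={"id": 42, "status": "active"},
    required_fields=("id", "status", "balance"),
)
print(errors)  # ['missing field: balance']
```

The same three checks map directly onto Postman's test scripts or onto REST Assured matchers in a Java suite.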

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Responsibilities
- Provide advanced technical support to internal employees across distributed locations (NY, VA, and Chennai).
- Educate end users on maximizing technology usage and improve tech adoption.
- Perform device provisioning, desktop imaging, and software installations, and manage networking hardware.
- Monitor systems proactively using tools to identify and resolve performance issues.
- Ensure high availability and reliability of systems and applications through best practices.
- Manage infrastructure changes aligned with enterprise architecture and business continuity standards.
- Administer and support Microsoft 365 environments including Exchange Online, Teams, and SharePoint.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in systems administration with deep expertise in Microsoft 365 and Azure AD.
- Hands-on knowledge of MDM tools like Microsoft Intune/Endpoint Manager and identity/access management.
- Experience with ITIL-based incident/change management and troubleshooting Exchange scenarios.
- Strong interpersonal and communication skills with a patient, empathetic, and user-focused approach.
- Familiarity with both Windows and Mac environments; knowledge of Cisco Meraki, AirWatch, Atlassian, and NICE logging tools is a plus.
- Relevant certifications in Microsoft technologies or ITIL are preferred.

Posted 6 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer

Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About The Role
We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and cloud-native deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities

MLOps & Machine Learning Deployment
- Design, implement, and maintain end-to-end ML pipelines from experimentation to production.
- Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks.
- Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
- Monitor ML systems in production for drift, bias, performance degradation, and anomalies.
- Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.

Data Engineering & Integration
- Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
- Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
- Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
- Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
- Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
- Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).
Cloud & Infrastructure
- Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
- Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, and Metaflow.
- Optimize for cost, latency, and scalability across distributed environments.
- Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities
- Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
- Integrate vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
- Enable retrieval-augmented generation (RAG) pipelines for LLMs.
- Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability
- Implement robust access control, encryption, and compliance with SOC 2/GDPR/ISO 27001.
- Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
- Ensure zero-downtime deployments with blue-green/canary release strategies.
- Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications

Core Technical Skills
- Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus.
- MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
- Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Vector Databases: Pinecone, Weaviate, Milvus, Chroma.
- Visualization: Plotly Dash, Superset, Grafana.

Tech Stack
- Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
- Infrastructure as Code: Terraform, Pulumi, Ansible.
- Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
- Model Optimization: ONNX, TensorRT, Hugging Face Optimum.
- Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams.
- Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
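Drift monitoring of the kind this role describes is often implemented as a statistical check on incoming feature distributions. The sketch below is an illustrative assumption, not this employer's pipeline: it computes the Population Stability Index (PSI) between a training-time sample and live traffic, using the conventional 0.2 alert threshold.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range live values

    def frac(sample, i):
        count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(count / len(sample), 1e-6)  # avoid log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]        # training distribution
live_ok = [random.gauss(0.0, 1.0) for _ in range(5000)]      # same distribution: stable
live_drift = [random.gauss(1.5, 1.0) for _ in range(5000)]   # shifted mean: drifted

assert psi(train, live_ok) < 0.1      # well under the alert threshold
assert psi(train, live_drift) > 0.2   # would trigger a drift alert
```

In production this check would run on a schedule (e.g., via Airflow) per monitored feature, with alerts routed through Prometheus/Grafana.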

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're seeking a highly skilled and motivated Lead Java Backend Developer to join our client's team. This is a full-time, Work From Office (WFO) role where you'll play a crucial part in designing, developing, and deploying robust backend solutions.

Note: A face-to-face (F2F) round is mandatory for the final discussion. The role is initially based in Mumbai (2-3 months), followed by relocation to a client location in Pune, Hyderabad, or Bangalore.

Responsibilities:
- Lead the design, development, and implementation of highly scalable and reliable backend services using Java and Spring Boot.
- Architect and build microservices-based applications.
- Apply strong proficiency in Spring Boot and building RESTful APIs.
- Work with message brokers like Kafka.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Mentor junior developers and contribute to a culture of technical excellence.
- Drive the adoption of best practices, including SOLID principles and Clean Architecture.
- Participate in code reviews to maintain high code quality.
- Use Git hands-on for version control.
- Apply proficiency in SQL and relational databases.
- Use monitoring and logging tools such as the ELK stack (Elasticsearch, Logstash, Kibana).
- Experience with Cassandra is a plus.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role
We are looking for a Systems Integration Engineer to join our Z2 Enterprise platform team. This role focuses on integrating enterprise systems (PLM, ERP, CRM, etc.) such as Agile PLM, Teamcenter, Windchill, SAP, and Salesforce, building bi-directional connectors and integration flows. You will be responsible for enabling smooth data synchronization between Z2 Enterprise and external enterprise applications through APIs, middleware, and ETL pipelines.

Key Responsibilities

Integration Design & Development
- Analyze requirements and design integration workflows between Z2 Enterprise and external systems (PLM, ERP, CRM).
- Develop connectors, APIs, and middleware services to enable bi-directional data exchange.
- Implement real-time (API/webhook) and batch/scheduled data flows.

System Connectivity & APIs
- Work with REST, SOAP, OData, and GraphQL APIs, and message queues, to integrate heterogeneous systems.
- Understand and leverage vendor-specific APIs for Agile PLM, Teamcenter, Windchill, SAP, Salesforce, etc.

Data Transformation & Synchronization
- Map and transform data models and schemas between different enterprise systems.
- Ensure data quality, consistency, and conflict resolution during integration.

Monitoring & Maintenance
- Set up monitoring, logging, and alerting for integration pipelines.
- Troubleshoot and resolve integration failures in a timely manner.

Collaboration
- Partner with product managers, enterprise architects, and engineering teams to define integration strategies.
- Document APIs, workflows, and best practices.

Required Skills and Qualifications

Enterprise Systems Expertise
- Hands-on experience with PLM systems (Agile PLM, Teamcenter, Windchill, etc.).
- Familiarity with ERP (SAP, Oracle EBS, NetSuite) and CRM (Salesforce, Dynamics 365) systems.

Integration & API Development
- Strong experience with RESTful and SOAP API design and consumption.
- Familiarity with API management platforms (MuleSoft, Boomi, Apigee, etc.) or custom middleware.
- Knowledge of message brokers (Kafka, RabbitMQ, ActiveMQ).

Programming & Data Skills
- Proficiency in Java, Spring Boot, or Node.js for building integration services.
- Experience with ETL processes, JSON/XML data transformation, and schema mapping.

Security & Best Practices
- Knowledge of authentication/authorization standards (OAuth2, SAML, SSO).
- Strong debugging and problem-solving skills.

Preferred Qualifications
- Experience working on large-scale enterprise integration projects.
- Familiarity with iPaaS (Integration Platform as a Service) tools.
- Understanding of enterprise data modeling and master data management (MDM) concepts.
- Exposure to cloud services (AWS, Azure, GCP) for hosting integrations.

Education
- Bachelor's degree in Computer Science, Engineering, or a related field.

What We Offer
- Work on complex, high-impact integrations between Z2 Enterprise and global enterprise systems.
- Competitive salary and comprehensive benefits.
- A collaborative, cross-functional work environment.
- Opportunities to work on cutting-edge integration technologies and enterprise architectures.
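The schema-mapping work this role describes often reduces to a declarative field map plus per-field transforms. The sketch below is purely illustrative (the field names and column limits are hypothetical, not Z2 Enterprise's or SAP's actual schemas): it converts a PLM-style part record into an ERP-style material record.

```python
# Declarative field map: target field -> (source field, transform).
# Field names here are hypothetical examples, not real PLM/ERP schemas.
FIELD_MAP = {
    "MaterialNumber": ("part_number", str.upper),
    "Description":    ("title", lambda s: s.strip()[:40]),      # assumed ERP column limit
    "WeightKg":       ("mass_grams", lambda g: round(g / 1000, 3)),
}

def transform(plm_record: dict) -> dict:
    """Map a PLM-style part record into an ERP-style material record."""
    erp = {}
    for target, (source, fn) in FIELD_MAP.items():
        if source not in plm_record:
            # Surface data-quality problems instead of silently dropping fields.
            raise KeyError(f"missing source field: {source}")
        erp[target] = fn(plm_record[source])
    return erp

part = {"part_number": "abc-100", "title": "  Bracket, left  ", "mass_grams": 1250}
material = transform(part)
assert material == {"MaterialNumber": "ABC-100", "Description": "Bracket, left", "WeightKg": 1.25}
```

Keeping the map declarative makes bi-directional sync easier: the reverse direction is a second map, and both can be validated against the vendors' published schemas.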

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role
We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments both on AWS cloud and on-premise (open source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities

Infrastructure Setup & Management
- Design and implement cloud-native architectures on AWS, and manage on-premise open source environments when required.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Maintain scalable environments for dev, staging, and production.

CI/CD & Release Management
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads.
- Enable automated testing, security scanning, and artifact deployments.
- Manage configuration and secret management across environments.

Containerization & Orchestration
- Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s).
- Implement service mesh, auto-scaling, and rolling updates.

Monitoring, Security, and Reliability
- Implement observability (logging, metrics, tracing) using open source or cloud tools.
- Ensure security best practices across infrastructure, pipelines, and deployed services.
- Troubleshoot incidents, manage disaster recovery, and support high availability.

Model DevOps / MLOps
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps).
- Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications

Cloud & Infra
- Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.
- Ability to set up and manage on-premise or hybrid environments using open source tools.

DevOps & Automation
- Hands-on experience with Terraform / CloudFormation.
- Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD.

Containerization & Orchestration
- Expertise with Docker and Kubernetes (EKS or self-hosted).
- Familiarity with Helm charts and service meshes (Istio/Linkerd).

Monitoring / Observability Tools
- Experience with Prometheus, Grafana, ELK/EFK stack, and CloudWatch.
- Knowledge of distributed tracing tools like Jaeger or OpenTelemetry.

Security & Compliance
- Understanding of cloud security best practices.
- Familiarity with tools like Vault and AWS Secrets Manager.

Model DevOps / MLOps Tools (Preferred)
- Experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B).
- Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation.

Preferred Qualifications
- Knowledge of cost optimization for cloud and hybrid infrastructures.
- Exposure to infrastructure-as-code (IaC) best practices and GitOps workflows.
- Familiarity with serverless and event-driven architectures.

Education
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

What We Offer
- Opportunity to work on modern cloud-native systems and AI-powered platforms.
- Exposure to hybrid environments (AWS and open source on-prem).
- Competitive salary, benefits, and growth-oriented culture.
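Observability tooling like Prometheus (listed above) scrapes metrics over HTTP in a simple line-based text exposition format. The sketch below renders a hypothetical request counter and in-flight gauge in that format using only the standard library; it is roughly what a client library emits on a /metrics endpoint, with the metric names being illustrative assumptions.

```python
def render_metrics(metrics):
    """Render (name, type, help, value, labels) tuples in Prometheus text exposition format."""
    lines = []
    for name, mtype, help_text, value, labels in metrics:
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        # Labels are rendered sorted for a stable, scrape-friendly output.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}" if labels else f"{name} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical application metrics: a request counter and an in-flight gauge.
payload = render_metrics([
    ("http_requests_total", "counter", "Total HTTP requests.", 1027,
     {"method": "get", "code": "200"}),
    ("inflight_requests", "gauge", "Requests currently being served.", 3, {}),
])
assert 'http_requests_total{code="200",method="get"} 1027' in payload
assert "# TYPE inflight_requests gauge" in payload
```

In practice you would use an official Prometheus client library rather than hand-formatting, but knowing the wire format helps when debugging scrape failures.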

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary:
We are looking for a highly skilled Java Full Stack Developer with over 3 years of experience in backend and frontend development. The ideal candidate should have strong expertise in Java, Spring Boot, Kafka, and J2EE on the backend, along with hands-on experience in Angular or React for frontend development. Familiarity with cloud platforms, Git, Jenkins, and writing test cases is essential.

Key Responsibilities:
- Develop and maintain robust backend services using Java, Spring Boot, Kafka, and J2EE.
- Build responsive and dynamic user interfaces using Angular or React.
- Write and maintain unit and integration test cases to ensure code quality.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Deploy and manage applications on cloud platforms (GCP, AWS, Azure, etc.).
- Use Git for version control and Jenkins for CI/CD pipelines.
- Troubleshoot and resolve issues across the full stack in a timely manner.
- Follow best practices for coding, testing, and deployment.

Required Skills & Qualifications:
- Minimum 3 years of experience in full stack development.
- Strong backend development skills in Java, Spring Boot, Kafka, and J2EE.
- Proficiency in frontend frameworks: Angular or React.
- Experience in writing unit and integration tests (e.g., JUnit, TestNG).
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Proficient in using Git and Jenkins.
- Solid understanding of RESTful APIs and microservices architecture.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.

Preferred Qualifications:
- Experience with Docker and Kubernetes.
- Familiarity with monitoring and logging tools.
- Exposure to Agile/Scrum methodologies.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Python Developer
We are looking for a talented Senior Software Developer with strong Python skills. You will join our AI Solutions team. We focus on creating advanced AI systems in the Electrification field. In this role, you will work with AI/ML engineers, data scientists, and cloud architects. Together, you will develop robust, scalable software solutions using the latest AI technologies. This is a great chance to work where AI innovation meets high-performance backend systems and cloud-native development.

Responsibilities
- Write high-quality, testable, and maintainable Python code using object-oriented programming (OOP), SOLID principles, and design patterns.
- Develop RESTful APIs and backend services for AI/ML model serving using FastAPI.
- Collaborate with AI/ML engineers to integrate and deploy Machine Learning, Deep Learning, and Generative AI models into production environments.
- Contribute to software architecture and design discussions to ensure scalable and efficient solutions.
- Implement CI/CD pipelines and adhere to DevOps best practices for reliable and repeatable deployments.
- Design for observability, incorporating structured logging, performance monitoring, and alerting mechanisms.
- Optimize code and system performance, ensuring reliability and robustness at scale.
- Participate in code reviews, promote clean code practices, and mentor junior developers when needed.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
- 5+ years of hands-on experience in software development, with a focus on Python.
- Deep understanding of OOP concepts, software architecture, and design patterns.
- Experience with backend web frameworks, preferably FastAPI.
- Familiarity with integrating ML/DL models into software solutions.
- Practical experience with CI/CD, containerization (Docker), and version control systems (Git).
- Exposure to MLOps practices and tools for model deployment and monitoring.
- Strong collaboration and communication skills in cross-functional engineering teams.
- Familiarity with cloud platforms like AWS (e.g., SageMaker, Bedrock) or Azure (e.g., ML Studio, OpenAI Service).

Preferred Qualifications
- Experience in Rust is a strong plus.
- Experience working on high-performance, scalable backend systems.
- Exposure to logging/monitoring stacks like Prometheus, Grafana, ELK, or OpenTelemetry.
- Understanding of data engineering concepts, ETL pipelines, and processing large datasets.
- Background or interest in the Power and Energy domain is a plus.

Mandatory Skills
- Python
- AI/ML model serving using FastAPI
- ML/DL models
- MLOps practices
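"Design for observability" with structured logging, as called for above, usually means emitting machine-parseable JSON log lines rather than free text. A minimal sketch using only Python's standard logging module follows; the field names ("request_id", "latency_ms") are illustrative assumptions, not a prescribed schema.

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Format each record as one JSON object per line, ready for ELK/OpenTelemetry pipelines."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Structured context arrives via logging's `extra=` mechanism.
            "request_id": getattr(record, "request_id", None),
            "latency_ms": getattr(record, "latency_ms", None),
        }
        return json.dumps(payload)

stream = io.StringIO()  # stand-in for stdout so the example is self-checking
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("model_serving")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("prediction served", extra={"request_id": "req-42", "latency_ms": 12.7})

entry = json.loads(stream.getvalue())
assert entry["request_id"] == "req-42" and entry["level"] == "INFO"
```

Because every line is valid JSON, log aggregators can index fields like latency_ms directly, which is what makes alerting on them possible.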

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About NetApp
NetApp is the intelligent data infrastructure company, turning a world of disruption into opportunity for every customer. No matter the data type, workload or environment, we help our customers identify and realize new business possibilities. And it all starts with our people.

If this sounds like something you want to be part of, NetApp is the place for you. You can help bring new ideas to life, approaching each challenge with fresh eyes. Of course, you won't be doing it alone. At NetApp, we're all about asking for help when we need it, collaborating with others, and partnering across the organization - and beyond.

Job Summary
NetApp is a cloud-led, data-centric software company that helps organizations put data to work in applications that elevate their business. We help organizations unlock the best of cloud technology. We are seeking a skilled and innovative Cloud Engineer to join our team. As a Cloud Engineer, you will be responsible for developing and maintaining cloud-based solutions, with a focus on coding complex problems, automation, and collaborating with the Site Reliability Engineering team for feature deployment in production. You will also be responsible for designing and implementing managed Cloud Services according to the given requirements. Additionally, you should be able to quickly learn the existing code and architecture.

Job Requirements
- Develop, test, and maintain cloud-based applications and services using Go or Python, or with proficiency in another language such as Java, C++, .NET Core, or Ruby.
- Write clean, efficient, and maintainable code to solve complex problems and improve system performance.
- Automate deployment, scaling, and monitoring of cloud-based applications and infrastructure.
- Proficiency in using AI tools like Copilot to enhance productivity in automation, documentation, and unit test writing.
- Solid understanding of cloud computing concepts and services (e.g., AWS, Azure, Google Cloud).
- Experience with containerization technologies (e.g., Docker, Kubernetes) and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Proficient in designing and implementing RESTful APIs and microservices architectures.
- Familiarity with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI/CD).
- Knowledge of networking concepts, security best practices, and system administration.
- Knowledge of database technologies (e.g., SQL, NoSQL) and data storage solutions.
- Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack).

Education
- Minimum 5 years of experience; must be hands-on with coding.
- B.E/B.Tech or M.S in Computer Science or a related technical field.

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.

Equal Opportunity Employer:
NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.

Why NetApp?
We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better - but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches.

We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations.
We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life. If you want to help us build knowledge and solve big problems, let's talk.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role
We are looking for a Senior AI Engineer to join our AI Platform team and play a critical role in designing, developing, and scaling next-generation AI-driven applications. You will work on Generative AI (GenAI), agentic AI systems, AI-driven workflows, and intelligent automation, bringing together LLMs, RAG (Retrieval-Augmented Generation), embeddings, and model refinement techniques into production-ready solutions. This is a hands-on role that combines AI/ML expertise, software engineering skills, and product thinking to deliver intelligent enterprise applications.

Key Responsibilities

AI Application Development
- Design and build end-to-end AI applications, including prompt engineering, chaining, and orchestration.
- Implement agentic AI systems that can reason, plan, and execute workflows.
- Integrate AI with existing enterprise workflows and systems.

LLMs and RAG Pipelines
- Develop RAG pipelines (embedding, vector search, retrieval) for contextual, domain-specific AI applications.
- Work with LLM APIs and open-source models (OpenAI, Anthropic, Llama, etc.), and fine-tune or adapt models to domain requirements.
- Optimize context injection, prompt construction, and multi-turn conversational flows.

Model Evaluation & Refinement
- Continuously evaluate AI model outputs and refine performance through prompt tuning, fine-tuning, and RLHF or similar techniques.
- Establish automated pipelines for model monitoring, feedback collection, and iterative improvements.

Platform & Engineering
- Work with vector databases (pgvector, Pinecone, Weaviate, etc.), knowledge graphs, and embeddings at scale.
- Build API endpoints and microservices that expose AI capabilities in a scalable, production-grade manner.
- Ensure observability, logging, versioning, and A/B testing for AI services.

Collaboration
- Collaborate with product managers, data engineers, and software engineers to define AI-driven features.
- Stay current with emerging trends in AI/ML, LLM architectures, and multi-agent systems.

Required Skills and Qualifications

Core Expertise
- 5+ years of experience in AI/ML engineering with a focus on applied AI solutions.
- Hands-on experience with LLMs (OpenAI, Anthropic, Hugging Face, etc.) and prompt engineering.
- Strong experience building RAG pipelines with embeddings and vector search.

Software Engineering
- Proficiency in Python (FastAPI/Flask) or Java/Spring Boot for AI service development.
- Knowledge of microservices, APIs, and cloud-native architectures (AWS/GCP/Azure).

Data & Vector Systems
- Familiarity with pgvector, Pinecone, Weaviate, Milvus, or similar vector databases.
- Experience with data preprocessing, cleaning, and knowledge management for AI applications.

AI Ops & Lifecycle
- Strong understanding of MLOps/LLMOps principles: model deployment, monitoring, retraining, and lifecycle management.
- Ability to evaluate AI outputs and incorporate feedback loops for model improvement.

Soft Skills
- Problem-solving, analytical thinking, and a passion for innovation.
- Excellent communication and collaboration skills.

Preferred Qualifications
- Experience building agentic AI systems or autonomous multi-agent workflows.
- Exposure to knowledge graphs and reasoning systems.
- Background in NLP, transformers, embeddings, and vector similarity search.
- Contributions to open-source AI/ML projects.

Education
- Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field.

What We Offer
- Opportunity to build an AI platform from the ground up, shaping the future of enterprise AI.
- Work with cutting-edge AI technologies and research.
- Competitive compensation and benefits.
- A collaborative environment that values innovation and experimentation.
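The retrieval half of a RAG pipeline described above (embed, search, inject context) can be sketched in a few lines. This is a toy illustration, an assumption for clarity only: real pipelines use a trained embedding model and a vector database such as pgvector or Pinecone, while this version substitutes bag-of-words vectors and cosine similarity.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (a real pipeline uses a trained model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Vector store": documents indexed alongside their embeddings.
docs = [
    "reset your password from the account settings page",
    "invoices are emailed on the first day of each month",
    "contact support to change your billing address",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query, k=1):
    """Return the k most similar documents, to be injected as context into the LLM prompt."""
    q = embed(query)
    return [doc for doc, vec in sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)[:k]]

context = retrieve("how do I reset my password?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: how do I reset my password?"
assert "account settings" in context
```

Swapping `embed` for a real model and `index` for a vector database changes nothing structurally; the retrieve-then-prompt shape is the whole pattern.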

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Software Development Engineer-Test II

What You Will Do
Let's do this. Let's change the world. In this vital role you will work closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, automating operations, monitoring system health, and responding to incidents to minimize downtime.

Test Automation & Framework Development
- Design, develop, and maintain scalable test automation frameworks (UI, API, performance).
- Implement reusable test libraries and utilities to accelerate test development.

Test Planning & Execution
- Collaborate with Product, Development, and DevOps teams to define test strategies, scope, and acceptance criteria.
- Author, review, and execute automated and manual test cases for new features and bug fixes.

Continuous Integration & Deployment
- Integrate automated tests into CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps).
- Monitor build health, triage failures, and work with developers to resolve test stability issues.
Defect Management & Reporting
- Track, document, and prioritize defects; work with cross-functional teams to ensure timely resolution.
- Generate and present test reports, metrics, and dashboards to leadership.

Performance & Security Testing (as applicable)
- Design and run performance/load tests using tools like JMeter, Gatling, or similar.
- Collaborate with security teams to integrate automated security scans and address vulnerabilities.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Bachelor's or Master's degree and 5 to 9 years of experience in Computer Science, IT, or a related field.

Preferred Qualifications:

Functional Skills:
- Programming in at least one modern language (e.g., Java, C#, Python, JavaScript/TypeScript).
- Hands-on experience with test automation frameworks (e.g., Selenium, Cypress, Playwright, REST Assured).
- Familiarity with API testing tools (e.g., Postman, SoapUI) and related libraries.
- Familiarity with testing AI models.
- Solid understanding of CI/CD practices and tools (Jenkins, GitHub Actions, Azure DevOps).
- Working knowledge of version control systems (Git).
Good-to-Have Skills:
- Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk).
- Experience with data processing tools like Hadoop, Spark, or similar.
- Experience with SAP integration technologies.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
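Reusable test libraries of the kind this role calls for typically include a retry wrapper for flaky checks, for example when polling an eventually-consistent API. The sketch below is illustrative (not Amgen's actual framework) and uses only the standard library.

```python
import time

def retry(check, attempts=3, delay=0.0):
    """Re-run a flaky assertion-style check, returning its value on first success.

    Raises the last failure if all attempts are exhausted, a common utility in
    test automation frameworks for polling eventually-consistent systems.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return check()
        except AssertionError as exc:
            last_error = exc
            time.sleep(delay)  # back off before re-checking (0 here to keep the demo fast)
    raise last_error

# Simulated eventually-consistent resource: only "ready" on the third poll.
polls = {"count": 0}
def resource_ready():
    polls["count"] += 1
    assert polls["count"] >= 3, "resource not ready yet"
    return "ready"

assert retry(resource_ready, attempts=5) == "ready"
assert polls["count"] == 3  # succeeded on the third attempt
```

The same pattern underlies explicit waits in Selenium and polling helpers in API suites; centralizing it in one utility keeps individual tests free of ad hoc sleep loops.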

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
- Develop and maintain user-friendly web applications with React.js.
- Write clean, maintainable, and efficient code using HTML, CSS, JavaScript (ES6+), and TypeScript.
- Work closely with UX/UI designers to bring mockups to life with responsive and accessible designs.
- Optimize applications for speed, scalability, and cross-browser compatibility.
- Implement and maintain front-end state management solutions such as Redux.
- Collaborate with back-end developers to integrate APIs and ensure smooth data flow.
- Debug and resolve front-end issues, improving performance and usability.
- Stay updated with the latest front-end technologies and industry trends.

Your Role And Responsibilities
The IBM Storage Software Engineering team is looking for a Software Engineer to join us in Bangalore, India. In this role, you will work on user interface design and development for OpenShift Data Foundation. You will have the opportunity to work alongside a team of software developers, product designers, quality assurance engineers, and the open source community. You'll add and enhance features, automate testing of features, and create example code and documentation.

Required Technical And Professional Expertise
- 3-5 years of experience in front-end development.
- Strong proficiency in React.js and ecosystem tools.
- Experience with TypeScript.
- Proficiency in modern CSS frameworks like SCSS.
- Familiarity with version control systems like Git and CI/CD pipelines.
- Understanding of performance optimization techniques (lazy loading, caching, etc.).
- Knowledge of testing frameworks such as Cypress or React Testing Library.
- Knowledge of monitoring tools (Prometheus) and logging frameworks.
- Experience with Agile methodologies and working in a collaborative team environment.

Preferred Technical And Professional Experience
- Knowledge of open source development and working experience on open source projects.
- Familiarity with cloud platforms (AWS, Azure, GCP) and their storage services.
Experience with container orchestration tools such as Kubernetes. Ability to work effectively in a collaborative, cross-functional team environment.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Andhra Pradesh, India

On-site

Key Responsibilities

Update Mechanism & Distribution Testing
- Assess the update delivery pipeline for unauthorized access, misconfigurations, or delivery flaws.
- Simulate HMAC token forge/replay attacks to test authentication robustness.
- Test code-signing integrity by attempting to modify signed update bundles.
- Simulate rollback scenarios, downgrade attack vectors, and patch bypass attempts.

Backend & Infrastructure Security
- Perform RBAC abuse tests to detect privilege escalation opportunities.
- Verify audit logging and forensic traceability of system actions.
- Check backend service configurations for policy compliance and data protection.

Availability & Threat Resilience
- Conduct DoS resilience testing by simulating excessive/malformed requests.
- Perform mobile reverse engineering to detect information leakage or insecure storage.

Reporting & Retesting
- Provide a detailed vulnerability report with CVSS scores and POC evidence.
- Collaborate with DevSecOps for remediation validation and re-testing.

Required Skills & Qualifications
- 7+ years of penetration testing experience in enterprise environments.
- Deep knowledge of the OWASP Top 10 (Web, API, Mobile).
- Hands-on experience testing mobile hybrid apps (Capacitor/Ionic).
- Expertise in code signing, HMAC validation, and secure OTA update mechanisms.
- Familiarity with Azure-hosted services, WebAPI, and SQL Server.
- Proficient with tools such as Burp Suite, MobSF, Frida, Drozer, OWASP ZAP, Metasploit, Postman, Wireshark.
- Strong scripting/debugging knowledge (Python, JavaScript, Bash).
- Understanding of regulatory/compliance frameworks: ISO 27001, GDPR, NIST.
- Certifications preferred: OSCP, CEH, GMOB, GWAPT.

Additional Context
- App Architecture: Hybrid (Ionic + Capacitor)
- Backend: .NET Core, WebAPI, Azure Blob Storage
- CI/CD: Azure DevOps, App Center
- Governance: Scoped under Qatar Airways IT & Cyber Security policies
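The HMAC forge/replay testing this listing describes hinges on two server-side checks: a timing-safe tag comparison (so forged tags fail) and a freshness window (so captured tokens cannot simply be replayed). A minimal sketch in Python; the secret, token format, and 300-second window are illustrative assumptions, not the actual update pipeline's scheme:

```python
import hashlib
import hmac
import time

# Hypothetical server-side secret and token layout, for illustration only.
SECRET = b"server-side-secret"

def sign_token(payload: bytes, timestamp: int) -> str:
    """Sign payload plus timestamp with HMAC-SHA256."""
    msg = payload + str(timestamp).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_token(payload: bytes, timestamp: int, tag: str, max_age: int = 300) -> bool:
    """Reject stale/replayed timestamps, then compare tags in constant time."""
    if abs(time.time() - timestamp) > max_age:
        return False  # token outside the freshness window (replay attempt)
    expected = sign_token(payload, timestamp)
    return hmac.compare_digest(expected, tag)  # timing-safe comparison

now = int(time.time())
tag = sign_token(b"update-bundle-v2", now)
print(verify_token(b"update-bundle-v2", now, tag))         # genuine token: True
print(verify_token(b"tampered-bundle", now, tag))          # forged payload: False
print(verify_token(b"update-bundle-v2", now - 3600, tag))  # replayed old token: False
```

A penetration tester probing this scheme would try exactly the second and third cases: modifying the payload while keeping the tag, and resubmitting a previously captured token after the window closes.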

Posted 1 week ago

Apply

5.0 years

0 Lacs

Delhi, India

Remote

Position Title: Infrastructure Solution Architect Position Type: Regular - Full-Time Position Location: New Delhi Requisition ID: 32004 Job Purpose As a Cloud Infrastructure Solution Architect, you'll drive the success of our IT Architecture program through your design expertise and consultative approach. You'll collaborate with stakeholders to understand their technical requirements, designing and documenting tailored solutions. Your blend of architecture and operations experience will enable you to accurately size work efforts and determine the necessary skills and resources for projects. Strong communication, time management, and process skills are essential for success in this role. You should have deep experience in defining Infrastructure solutions: Design, Architecture and Solution Building blocks. Role Overview The cloud infrastructure architect role helps teams (such as product teams, platform teams and application teams) successfully adopt cloud infrastructure and platform services. It is heavily involved in design and implementation activities that result in new or improved cloud-related capabilities, and it brings skills and expertise to such areas as cloud technical architecture (for a workload’s use of infrastructure as a service [IaaS] and platform as a service [PaaS] components); automating cloud management tasks, provisioning and configuration management; and other aspects involved in preparing and optimizing cloud solutions. Successful outcomes are likely to embrace infrastructure-as-code (IaC), DevOps and Agile ways of working and associated automation approaches, all underpinned by the cloud infrastructure engineer’s solid understanding of networking and security in the cloud. The nature of the work involved means that the cloud infrastructure engineer will directly engage with customer teams, but will also work on cloud infrastructure platform capabilities that span multiple teams. 
The cloud infrastructure architect collaborates closely with other architects, product/platform teams, software developers, cloud engineers, site reliability engineers (SREs), security and network specialists, as well as other roles, particularly those in infrastructure and operations. Being an approachable team player is therefore crucial for success, and willingness to lead initiatives is important too. The cloud infrastructure architect also supports colleagues with complex (escalated) operational concerns in areas such as deployment activities, event management, incident and problem management, availability, capacity and service-level management, as well as service continuity. The cloud infrastructure architect is expected to demonstrate strong attention to detail and a customer-centric mindset. Inquisitiveness, determination, creativity, and communication and collaboration skills are important qualities too. Key Responsibilities Provide expert knowledge on cloud infrastructure and platform solutions architecture to ensure our organization achieves its goals for cloud adoption. This involves translating cloud strategy and architecture into efficient, resilient, and secure technical implementations. Define cloud infrastructure landing zones, regional subscriptions, and Availability Zones to ensure HA, resiliency, and reliability of infrastructure and applications. Offer cloud-engineering thought leadership in areas such as defining specific cloud use cases, cloud service providers, and/or strategic tools and technologies. Support cloud strategy by working on new cloud solutions, including analysing requirements, supporting technical architecture activities, prototyping, design and development of infrastructure artifacts, testing, implementation, and preparation for ongoing support.
Work on cloud migration projects, including analyzing requirements and backlogs, identifying migration techniques, developing migration artifacts, executing processes, and ensuring preparations for ongoing support. Design, build, deliver, maintain and improve infrastructure solutions. This includes automation strategies such as IaC, configuration-as-code, policy-as-code, release orchestration and continuous integration/continuous delivery (CI/CD) pipelines, and collaborative ways of working (e.g., DevOps). Participate in change and release management processes, carrying out complex provisioning and configuration tasks manually, where needed. Research and prototype new tools and technologies to enhance cloud platform capabilities. Proactively identify innovative ways to reduce toil, and teach, coach or mentor others to improve cloud outcomes using automation. Improve reliability, scalability and efficiency by working with product engineers and site reliability engineers to ensure well-architected and thoughtfully operationalized cloud infrastructures. This includes assisting with nonfunctional requirements, such as data protection, high availability, disaster recovery, monitoring requirements and efficiency considerations in different environments. Provide subject matter expertise for all approved IaaS and PaaS services, respond promptly to escalated incidents and requests, and build reusable artifacts ready for deployment to cloud environments. Exert influence that lifts cloud engineering competency by participating in (and, where applicable, leading) organizational learning practices, such as communities of practice, dojos, hackathons and centers of excellence (COEs). Actively participate in mentoring. Practice continuous improvement and knowledge sharing (e.g., providing KB articles, training and white papers). Participate in planning and optimization activities, including capacity, reliability, cost management and performance engineering. 
Establish FinOps practices: cloud cost management, scale up/down, and environment creation/deletion based on consumption. Work closely with security specialists to design, implement and test security controls, and ensure engineering activities align to security configuration guidance. Establish logging, monitoring and observability solutions, including identification of requirements, design, implementation and operationalization. Optimize infrastructure integration in all scenarios: single cloud, multicloud and hybrid. Convey the pros and cons of cloud services and other cloud engineering topics to others at differing levels of cloud maturity and experience, and in different roles (e.g., developers and business technologists). Be forthcoming and open when the cloud is not the best solution. Work closely with third-party suppliers, both as an individual contributor and as a project lead, when required. Engage with vendor technical support as the customer lead role when appropriate. Participate in and lead problem management activities, including post-mortem incident analysis, providing technical insight, documented findings, outcomes and recommendations as part of a root cause analysis. Support resilience activities, e.g., disaster recovery (DR) testing, performance testing and tabletop planning exercises. The role holder is also expected to: Ensure that activities are tracked and auditable by leveraging service enablement systems, logging activity in the relevant systems of record, and following change and release processes. Collaborate with peers from other teams, such as security, compliance, enterprise architecture, service governance, and IT finance to implement technical controls to support governance, as necessary. Work in accordance with the organization’s published standards and ensure that services are delivered in compliance with policy. Promptly respond to requests for engineering assistance from technical customers as needed.
Provide engineering support, present ideas and create best-practice guidance materials. Strive to meet service-level expectations. Foster ongoing, closer and repeatable engagement with customers to achieve better, scalable outcomes. Take ownership of personal development, working with line management to identify development opportunities. Work with limited guidance, independently and/or as part of a team on complex problems, potentially requiring close collaboration with remotely based employees and third-party providers. Follow standard operating procedures, propose improvements and develop new standard operating procedures to further industrialize our approach. Advocate for simplification and workflow optimization, and follow documentation standards. Skills And Experience Skills and experience in the following activities/working styles are essential: Collaboration with developers (and other roles, such as SREs and DevSecOps engineers) to plan, design, implement, operationalize and problem-solve workloads that leverage cloud infrastructure and platform services. Working in an infrastructure or application support team. Cloud migration project experience (data center to cloud IaaS, cloud native, hybrid cloud). Securing cloud platforms and cloud workloads in collaboration with security teams. Familiarity or experience with DevOps/DevSecOps. Agile practices (such as Scrum/sprints, customer journey mapping, Kanban). Proposing new standards, addressing peer feedback and advocating for improvement. Understanding of software engineering principles (source control, versioning, code reviews, etc.). Working in an environment that complies with health and manufacturing regulations. Event-based architectures and associated infrastructure patterns. Experience working with specific technical teams (e.g., R&D teams, data and analytics teams).
Experience where immutable infrastructure approaches have been used. Implementing highly available systems, using multi-AZ and multi-region approaches. Skills And Experience In The Following Technology Areas Experience with Azure, GCP, AWS, SAP cloud provider services (Azure and SAP preferred). Experience with these cloud provider services is preferred: infra, data, app, API and integration services. DevOps tooling such as CI/CD (e.g., Jenkins, Jira, Confluence, Azure DevOps/ADO, TeamCity, GitHub, GitLab). Infrastructure-as-code approaches, role-specific automation tools and associated programming languages (e.g., Ansible, ARM, Chef, CloudFormation, Pulumi, Puppet, Terraform, Salt, AWS CDK, Azure SDK). Orchestration tools (e.g., Morpheus Data, env0, Cloudify, Pliant, Quali, RackN, vRA, Crossplane, Argo CD). Knowledge of software development frameworks/languages (e.g., Spring, Java, Go, PHP, Python). Container management (e.g., Docker, Rancher, Kubernetes, AKS, EKS, GKE, RHOS, VMware Tanzu). Virtualization platforms (e.g., VMware, Hyper-V). Operating systems (e.g., Windows and Linux, including scripting experience). Database technologies and caching (e.g., Postgres, MSSQL, NoSQL, Redis, CDN). Identity and access management (e.g., Active Directory/Azure AD, Group Policy, SSO, cloud RBAC, hierarchy and federation). Monitoring tools (e.g., AWS CloudWatch, Elastic Stack (Elasticsearch/Logstash/Kibana), Datadog, LogicMonitor, Splunk). Cloud networking (e.g., subnetting, route tables, security groups, VPC, VPC peering, NACLs, VPN, transit gateways, optimizing for egress costs). Cloud security (e.g., key management services, encryption, other core security services/controls the organization uses). Landing zone automation solutions (e.g., AWS Control Tower). Policy guardrails (e.g., policy-as-code approaches, cloud provider native policy tools, HashiCorp Sentinel, Open Policy Agent). Scalable architectures, including APIs, microservices and PaaS.
Analyzing cloud spending and optimizing resources (e.g., Apptio Cloudability, Flexera One, IBM Turbonomic, NetApp Spot, VMware CloudHealth). Implementing resilience (e.g., multi-AZ, multi-region, backup and recovery tools). Cloud provider frameworks (e.g., Well-Architected). Working with architecture tools and associated artifacts. General skills, behaviors, competencies and experience required include: Strong communication skills (both written and verbal), including the ability to adapt style to a nontechnical audience. Ability to stay calm and focused under pressure. Collaborative working. Proactive and detail-oriented, strong analytical skills, and the ability to leverage a data-driven approach. Willing to share expertise and best practices, including mentoring and coaching others. Continuous learning mindset, keen to learn and explore new areas, not afraid of starting from a novice level. Ability to present solutions, defend ideas against criticism, and provide constructive peer reviews. Ability to build consensus, make decisions based on many variables and gain support for initiatives. Business acumen, preferably industry- and domain-specific knowledge relevant to the enterprise and its business units. Deep understanding of current and emerging I&O and, in particular, cloud technologies and practices. Achieve compliance requirements by applying technical capabilities, processes and procedures as required. Job Requirements Education and Qualifications Essential Bachelor’s or Master’s degree in computer science, information systems, a related field, or equivalent work experience. Ten or more years of related experience in similar roles. Must have worked on implementing cloud at enterprise scale. Desirable Cloud provider/hyperscaler certifications preferred. Must Have Skills and Experience Strong problem-solving and analytical skills. Strong interpersonal and written and verbal communication skills. Highly adaptable to changing circumstances.
Interest in continuously learning new skills and technologies. Experience with programming and scripting languages (e.g., Java, C#, C++, Python, Bash, PowerShell). Experience with incident and response management. Experience with Agile and DevOps development methodologies. Experience with container technologies and supporting tools (e.g., Docker Swarm, Podman, Kubernetes, Mesos). Experience working in cloud ecosystems (Microsoft Azure, AWS, Google Cloud Platform). Experience with monitoring and observability tools (e.g., Splunk, CloudWatch, AppDynamics, New Relic, ELK, Prometheus, OpenTelemetry). Experience with configuration management systems (e.g., Puppet, Ansible, Chef, Salt, Terraform). Experience working with continuous integration/continuous deployment tools (e.g., Git, TeamCity, Jenkins, Artifactory). Experience in GitOps-based automation is a plus. Qualifications Bachelor’s degree (or equivalent years of experience). 5+ years of relevant work experience. SRE experience preferred. Background in manufacturing or platform/tech companies is preferred. Must have public cloud provider certifications (Azure, GCP or AWS). Having a CNCF certification is a plus. McCain Foods is an equal opportunity employer. We see value in ensuring we have a diverse, antiracist, inclusive, merit-based, and equitable workplace. As a global family-owned company we are proud to reflect the diverse communities around the world in which we live and work. We recognize that diversity drives our creativity, resilience, and success and makes our business stronger. McCain is an accessible employer.
If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to meet your needs. Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with the Global Employee Privacy Policy. Job Family: Information Technology Division: Global Digital Technology Department: Infrastructure Architecture Location(s): IN - India : Haryana : Gurgaon Company: McCain Foods (India) P Ltd
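The policy guardrails this listing mentions (HashiCorp Sentinel, Open Policy Agent, cloud-native policy tools) all reduce to the same idea: evaluating declarative rules against resource definitions before deployment. A minimal sketch of that idea in plain Python; the resource shape and the required-tag policy are illustrative assumptions, not any particular provider's schema:

```python
# Policy: every resource must carry these tags before it may be deployed.
# Tag names and resource dicts below are hypothetical examples.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def violations(resources: list[dict]) -> list[str]:
    """Return the names of resources missing any required tag."""
    failed = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            failed.append(res["name"])
    return failed

plan = [
    {"name": "vm-app-01",
     "tags": {"owner": "team-a", "cost-center": "42", "environment": "prod"}},
    {"name": "vm-app-02",
     "tags": {"owner": "team-b"}},  # missing cost-center and environment
]
print(violations(plan))  # ['vm-app-02']
```

In practice the same check would be expressed in Rego (OPA) or Sentinel and wired into the CI/CD pipeline so a non-empty violation list fails the deployment.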

Posted 1 week ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Position Title: AI Ops Engineer Position Type: Regular - Full-Time Position Location: New Delhi Grade: Grade 06 Requisition ID: 32845 Job Purpose Design, develop, and implement artificial intelligence (AI) solutions that leverage advanced algorithms and machine learning techniques to solve complex business problems. Work closely with cross-functional teams to understand requirements, gather and analyze data, and build AI models that can automate processes, improve decision-making, and drive innovation. Job Responsibilities Designing, developing, and implementing generative AI models and algorithms utilizing state-of-the-art techniques such as GPT, VAE, and GANs. Collaborating with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals. Conducting research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques and identify opportunities to integrate them into our products and services. Optimizing existing generative AI models for improved performance, scalability, and efficiency. Developing and maintaining AI pipelines, including data preprocessing, feature extraction, model training, and evaluation. Developing clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders. Contributing to the establishment of best practices and standards for generative AI development within the organization. Providing technical mentorship and guidance to junior team members. 
Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. Key Qualifications & Experience Minimum 5 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Knowledge and experience in generative AI. Proficiency in programming languages such as Python and R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure or AWS. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Track record of driving innovation and staying updated with the latest AI research and advancements. You have a degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A Master’s degree is preferred.
You have solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs. You are proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras. You have strong knowledge of data structures, algorithms, and software engineering principles. You are familiar with cloud-based platforms and services, such as AWS, GCP, or Azure. You have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face. You are familiar with data visualization tools and libraries, such as Matplotlib, Seaborn, or Plotly. You have knowledge of software development methodologies, such as Agile or Scrum. You possess excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions. You have strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience. You possess a proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment. McCain Foods is an equal opportunity employer. We see value in ensuring we have a diverse, antiracist, inclusive, merit-based, and equitable workplace. As a global family-owned company we are proud to reflect the diverse communities around the world in which we live and work. We recognize that diversity drives our creativity, resilience, and success and makes our business stronger. McCain is an accessible employer.
If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to meet your needs. Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with the Global Employee Privacy Policy. Job Family: Information Technology Division: Global Digital Technology Department: Cloud and Data Centre Location(s): IN - India : Haryana : Gurgaon Company: McCain Foods (India) P Ltd
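The monitoring and logging responsibility in this listing ("implement monitoring and logging tools to ensure AI model performance and reliability") can be illustrated with a small wrapper that records latency and counts failures around each prediction call. The model function and the stats layout are hypothetical stand-ins; a production system would export these metrics to a tool like Prometheus instead of a dict:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-monitor")

def model_predict(x: float) -> float:
    """Hypothetical model, for illustration: fails on negative input."""
    if x < 0:
        raise ValueError("negative input")
    return x * 2

def monitored_predict(x: float, stats: dict):
    """Wrap a prediction call with latency recording and failure counting."""
    start = time.perf_counter()
    try:
        result = model_predict(x)
        stats["ok"] += 1
        return result
    except Exception as exc:
        stats["errors"] += 1
        log.warning("prediction failed: %s", exc)
        return None
    finally:
        # Record latency for every call, successful or not.
        stats["latency_ms"].append((time.perf_counter() - start) * 1000)

stats = {"ok": 0, "errors": 0, "latency_ms": []}
for x in [1.0, -2.0, 3.0]:
    monitored_predict(x, stats)
print(stats["ok"], stats["errors"])  # 2 1
```

The same pattern scales up: the counters become metrics, the log lines feed a central logging framework, and alerts fire when the error rate or latency percentile crosses a threshold.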

Posted 1 week ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Company Description MXICODERS INC is a company where innovation meets opportunity. Our mission is to empower businesses with cutting-edge digital capabilities, enabling growth in a rapidly evolving technological landscape. We specialize in transforming complexity into opportunity, fostering a culture of innovation and adaptability. Join us on this journey to redefine industry standards and turn visionary ideas into reality. Connect with us on LinkedIn to be a part of our community shaping the future. Role Description Data Engineer / AI/ML Minimum Requirements Essential Technical Skills AI/ML (Required) 2+ years hands-on experience with LLM APIs (OpenAI, Anthropic, or similar) Production deployment of at least one AI system that's currently running in production LLM framework experience with LangChain, CrewAI, or AutoGen (any one is sufficient) Function calling/tool use - ability to build AI systems that can call external APIs and functions Basic prompt engineering - understanding of techniques like Chain-of-Thought and ReAct patterns Python Development (Required) 3+ years Python development with strong fundamentals API development using Flask or FastAPI with proper error handling Async programming - understanding of async/await patterns for concurrent operations Database integration - working with PostgreSQL, MySQL, or similar relational databases JSON/REST APIs - consuming and building REST services Production Systems (Required) 2+ years building production software that serves real users Error handling and logging - building robust systems that handle failures gracefully Basic cloud deployment - experience with AWS, Azure, or GCP (any one platform) Git/version control - collaborative development using Git workflows Testing fundamentals - unit testing and integration testing practices Business Process (Basic Required) User requirements - ability to translate business needs into technical solutions Data quality - recognizing and handling dirty/inconsistent data 
Exception handling - designing workflows for edge cases and errors Professional Experience (Minimum) Software Engineering 3+ years total software development experience 1+ production AI project - any AI/ML system deployed to production (even simple ones) Cross-functional collaboration - worked with non-technical stakeholders Problem-solving - demonstrated ability to debug and resolve complex technical issues Communication & Collaboration Technical documentation - ability to write clear technical docs and code comments Stakeholder communication - explain technical concepts to business users Independent work - ability to work autonomously with minimal supervision Learning agility - quickly pick up new technologies and frameworks Educational Background (Any One) Formal Education Bachelor's degree in Computer Science, Engineering, or related technical field OR equivalent experience - demonstrable technical skills through projects/work Alternative Paths Coding bootcamp + 2+ years professional development experience Self-taught with strong portfolio of production projects Technical certifications (AWS, Google Cloud, etc.) + relevant experience [nice to have] Demonstrable Skills (Portfolio Requirements) Must Show Evidence Of One working AI application - GitHub repo or live demo of LLM integration Python projects - code samples showing API development and data processing Production deployment - any application currently running and serving users Problem-solving ability - examples of debugging complex issues or optimizing performance Nice to Have (Not Required) Financial services or fintech experience Vector databases (Pinecone, Weaviate) experience Docker/containerization knowledge Advanced ML/AI education or certifications Qualifications Strong understanding of Machine Learning, Deep Learning, and Natural Language Processing techniques Experience with programming languages such as Python, R, or similar languages Proficiency in frameworks like TensorFlow, PyTorch, or Keras Data preprocessing and analysis skills Strong problem-solving skills and ability to work in a collaborative environment Excellent written and verbal communication skills Experience in blockchain technology and applications is a plus Master’s or Ph.D. degree in Computer Science, Statistics, Mathematics, or a related field
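The "function calling/tool use" requirement in this listing refers to the loop in which an LLM emits a structured tool call and the application code dispatches it to a real function. A minimal sketch under simplified assumptions: the tool registry is hypothetical, and the JSON reply is simulated rather than an actual OpenAI or Anthropic response object:

```python
import json

def get_weather(city: str) -> str:
    """Stub for an external API the model is allowed to call."""
    return f"22C and clear in {city}"

# Registry mapping tool names the model may use to real functions.
TOOLS = {"get_weather": get_weather}

def dispatch(model_reply: str) -> str:
    """Parse a tool call emitted by the model and execute it."""
    call = json.loads(model_reply)
    fn = TOOLS[call["name"]]          # look up the registered tool
    return fn(**call["arguments"])    # call with model-chosen arguments

# Simulated model output; a real API returns a structured tool_calls field.
reply = '{"name": "get_weather", "arguments": {"city": "Ahmedabad"}}'
print(dispatch(reply))  # 22C and clear in Ahmedabad
```

In a real system the dispatch result is fed back to the model as a tool message so it can compose a final answer; frameworks such as LangChain, CrewAI, and AutoGen (all named in the listing) automate this loop.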

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description About Mirantis Mirantis is the Kubernetes-native AI infrastructure company, enabling organizations to build and operate scalable, secure, and sovereign infrastructure for modern AI, machine learning, and data-intensive applications. By combining open source innovation with deep expertise in Kubernetes orchestration, Mirantis empowers platform engineering teams to deliver composable, production-ready developer platforms across any environment—on-premises, in the cloud, at the edge, or in sovereign data centers. As enterprises navigate the growing complexity of AI-driven workloads, Mirantis delivers the automation, GPU orchestration, and policy-driven control needed to manage infrastructure with confidence and agility. Committed to open standards and freedom from lock-in, Mirantis ensures that customers retain full control of their infrastructure strategy. Job Description Mirantis is adding a Pre-Sales Solution Architect to our team. As part of our client-facing technical team, you will leverage your technical and consultative expertise to guide clients from their current state to strategic solutions that deliver measurable, real-world business outcomes. You will work in lockstep with the sales team to communicate Mirantis’ vision, features, and value, and with the services team to ensure that the vision and outcomes are delivered to the client. You will also work with product management, marketing, and engineering teams to understand our product and offerings, and to provide feedback which will define enhancements to our solutions. The Pre-Sales Solution Architect position requires technical thought leadership and offers candidates opportunities for professional growth across a variety of business and technology domains. You will play a key role as an individual leader in the field and contribute internally to an organizational culture that deeply values transparency, performance, development, creativity, collaboration, and trust. 
Responsibilities Act as a trusted technical advisor and client advocate. Build long-term, value-driven relationships with stakeholders. Understand client goals and architect tailored solutions using Mirantis products and open-source technologies to drive business outcomes. Drive technical wins during sales engagements by demonstrating solution fit, feasibility, and alignment with strategic objectives. Ensure seamless transitions from pre-sales to post-sales with ongoing technical guidance for successful delivery and adoption. Collaborate cross-functionally across Sales, Product, Services, and CTO teams to align on customer qualification, solutioning, and execution. Develop and present custom demos, prototypes, and reference architectures that align with customer use cases. Provide structured field insights and client feedback to support product and services strategy. Stay current with emerging technologies (e.g., Kubernetes, AI/ML) and serve as a knowledge-sharing resource internally and externally. Identify and nurture account and partner growth opportunities, aligning solutions to customer and partner visions. Contribute to strategic account planning and customer success roadmaps to support long-term engagements. Qualifications Required Skills/Abilities Proven experience engaging with senior IT and engineering leadership, including CIOs and CTOs, and aligning technical solutions with strategic priorities. Demonstrated thought leadership through client engagement, public speaking (e.g., conferences, webinars), or community contributions. Strong understanding of the competitive landscape within the cloud infrastructure and open-source ecosystem. Practical knowledge of distributed systems, modern application architectures, software development practices, and DevOps methodologies. Awareness of industry trends, compliance requirements, and regulatory environments, with the ability to assess their impact on client needs. 
- Familiarity with AI/ML infrastructure tools (e.g., Kubeflow, MLflow, NVIDIA AI) and how they integrate with cloud-native platforms.
- Demonstrated hands-on experience in the following areas:
  - Open-source software stacks
  - Cloud infrastructure, including Kubernetes, OpenStack, Docker, microservices, and public cloud services (AWS, GCP, Azure)
  - DevOps and automation tools, including CI/CD pipelines
  - Observability tooling, including open-source monitoring, logging, and alerting systems
- Experience working effectively in a globally distributed team environment.

Qualifications
- 5+ years of experience in a client-facing technical role such as Sales Engineer, Consultant, or Solutions Architect
- Four-year college degree preferred
- Excellent written and verbal communication skills, including public speaking and demonstrations
- Technical certifications (optional), such as Certified Kubernetes Administrator (CKA), AWS Certified Solutions Architect, or similar industry-recognized credentials
- Ability to travel up to 50%

Additional Information

Why you’ll love Mirantis
- Work with an established leader in the cloud infrastructure industry.
- Work with exceptionally passionate, talented, and engaging colleagues, helping Fortune 500 and Global 2000 customers implement next-generation cloud technologies.
- Be a part of cutting-edge, open-source innovation.
- Thrive in the high-energy environment of a young company where openness, collaboration, risk-taking, and continuous growth are valued.
- Receive a competitive compensation package with a strong benefits plan.
- We are a Leader for Container Management on G2 (#2 after AWS)!

Mirantis, Inc. may use automated decision-making technology (ADMT) for specific employment-related decisions. To opt out of ADMT use for decisions about evaluation and review connected with the specific employment decision for the position applied for, submit a request as described below.
You also have the right to appeal any decisions made by ADMT by sending your request to isamoylova@mirantis.com.

By submitting your resume, you consent to the processing and storage of your personal data in accordance with applicable data protection laws, for the purposes of considering your application for current and future job opportunities.

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 7 Lacs

Hyderabad

Work from Office

We are looking for a skilled Power Apps Developer with 2 to 7 years of experience to join our team in Bengaluru. The ideal candidate will have expertise in developing and implementing Power Apps solutions.

Roles and Responsibilities
- Design, develop, and deploy Power Apps solutions to meet business requirements.
- Collaborate with stakeholders to gather requirements and provide technical guidance on Power Apps capabilities.
- Develop custom modules and integrations using Power Apps development tools.
- Troubleshoot and resolve issues related to Power Apps deployment and performance.
- Provide training and support to end users on Power Apps usage.
- Stay updated with the latest trends and technologies in Power Apps development.

Job Requirements
- Proficiency in Power Apps development, with at least 2 years of experience.
- Strong understanding of software development life cycles and agile methodologies.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment and communicate effectively with stakeholders.
- Strong analytical and critical thinking skills.
- Experience in the IT Services & Consulting industry is preferred.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Creating Passion: Your Responsibilities

Surface Preparation
- Perform pre-cleaning, degreasing, and removal of contaminants such as oil, grease, dust, rust, and scale using manual or mechanical methods (e.g., shot blasting, wire brushing).
- Ensure the surface profile meets the paint manufacturer's requirements (anchor pattern, cleanliness).

Primer & Intermediate Coat Application
- Apply the primer coat uniformly, ensuring no under-coating or over-spray.
- Apply intermediate coats as specified in the paint system (e.g., epoxy, zinc-rich, polyamide-based primers).
- Allow proper flash-off and curing time between coats.

Top Coat Application
- Apply the final topcoat (e.g., polyurethane, alkyd, or epoxy finish) with the correct technique (HVLP or airless spray), ensuring an even finish, correct gloss, and the required shade.

Paint Mixing and Material Handling
- Mix paints and hardeners in the correct ratios as per the technical data sheet (TDS).
- Label and use paint materials before their expiry date; follow FIFO.
- Ensure correct storage of paints, thinners, and solvents as per MSDS.

Quality Control Checks
- Measure WFT (wet film thickness) during application.
- Measure DFT (dry film thickness) using an Elcometer gauge after curing.
- Record batch numbers, WFT/DFT results, and touch-up details in the paint logbook or ERP.
- Perform visual inspection for defects such as pinholes, orange peel, drips, under-coating, or shade mismatch.

Process Compliance
- Follow standard work instructions (WI), SOPs, and job cards for each order.
- Ensure compliance with ISO 12944, ISO 8501-1, and customer-specific corrosion protection standards.
- Apply correct masking and protect machined/critical surfaces before painting.

Touch-Up & Repairs
- Identify areas needing repair after handling, welding, or transport.
- Rework based on inspection reports while ensuring blending with the original coating.

Equipment & Tool Handling
- Operate spray painting equipment (conventional, airless, or electrostatic).
- Clean and maintain spray guns, hoses, compressors, and filters.
- Calibrate DFT meters and mixing scales as per schedule.

Documentation & Traceability
- Maintain detailed paint records including batch numbers, paint codes, operator ID, area covered, and environmental conditions.
- Complete paint inspection reports, ERP entries, and rework records as required.

Workplace & 5S Maintenance
- Maintain an organized and safe workplace per 5S and TPM principles.
- Ensure disposal of waste materials (rags, paint tins, thinner) in line with environmental norms.

Communication & Teamwork
- Communicate issues such as improper surface condition, missing job cards, equipment malfunction, or paint mismatch to the supervisor.
- Collaborate with fabrication, quality, and logistics teams on paint priorities and sequencing.

Contributing Your Strengths: Your Qualifications

Qualification & Education Requirements
- Minimum Qualification: ITI in Painter Trade or Diploma in Surface Coating Technology / Industrial Painting
- Additional Training: In-plant training or certification in spray painting methods preferred
- Certifications (preferred): NACE Level 1 / FROSIO Level 1 / equivalent corrosion protection certifications (desirable)
- Reading & Comprehension: Ability to understand and follow work instructions, safety signs, and paint specifications in English/Hindi/local language

Experience
- Industry Background: 2–5 years in industrial painting in heavy machinery, automotive, structural steel, or similar sectors
- Application Process: Hands-on experience in airless or HVLP spray painting, shot blasting, masking, and DFT checks
- Coating System Exposure: Familiarity with epoxy, PU, alkyd, zinc-rich, and water-based paint systems
- Quality Involvement: Experience with in-process paint quality inspection and documentation desirable

Special Skills / Competencies
- Knowledge of corrosion protection systems (e.g., ISO 12944-5 classification C3/C4/C5).
- Accurate paint mixing, thinning, and application technique.
- Understanding of paint curing times and environmental parameters (temperature, humidity).
- Proficiency in using DFT meters, Elcometer gauges, gloss meters, and standoff gauges.
- Knowledge of masking techniques for machined and threaded surfaces.
- Ability to work in standing, crouching, and overhead positions for long durations.
- Awareness of explosion-proof tools and ATEX zone rules (for enclosed paint areas).
- Basic computer skills (for logging job cards or ERP entries).

Health, Safety & Environmental (HSE) Compliance
- Personal Protective Equipment: Must wear respirator masks, gloves, face shields, safety shoes, coveralls, and hearing protection.
- Safe Work Practices: No open flames or smoking in the painting or solvent storage area. Follow lockout/tagout during equipment maintenance.
- MSDS Compliance: Handle paints, thinners, and chemicals as per MSDS guidelines.
- Ventilation Standards: Work only in ventilated spray booths with functioning exhaust systems.
- Waste Disposal: Dispose of used solvents, empty containers, and paint sludge in designated bins only.
- Fire Safety Awareness: Must know the location and use of fire extinguishers and spill kits in the paint shop.
- Medical Fitness: Fit to work with respiratory protection and to perform physically demanding tasks.

Have we awoken your interest? Then we look forward to receiving your online application. If you have any questions, please contact Sonali Samal.

One Passion. Many Opportunities.

The Company
Liebherr CMCtec India Private Limited in Pune (India) was established in 2008 and started manufacturing at its own facility on the Pune–Solapur Highway in 2012. The company is responsible for the production of tower cranes and drives.

Location
Liebherr CMCtec India Private Limited
Gat No. 196-199, Dhaygudewadi, NH-9
Pune, India (IN)

Contact
Sonali Samal
sonali.samal@liebherr.com
shweta.Chakrawarti@liebherr.com
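The WFT/DFT quality checks described in the listing above follow a standard coating-industry relation: the dry film thickness is roughly the wet film thickness scaled by the paint's volume-solids percentage, reduced further if thinner is added. A minimal sketch of that calculation; the volume-solids and thinner figures are illustrative assumptions, not values from any specific paint system's TDS:

```python
def expected_dft(wft_microns, volume_solids_pct, thinner_pct=0.0):
    """Estimate dry film thickness (DFT) from a wet film thickness (WFT)
    comb reading, using the volume-solids figure from the paint's
    technical data sheet (TDS).

    Standard coating relation:
        DFT = WFT * %volume_solids / (100 + %thinner_added)
    """
    return wft_microns * volume_solids_pct / (100.0 + thinner_pct)


# Example (illustrative numbers): a 120 micron WFT reading on a primer
# with 60% volume solids, applied unthinned, should cure to about 72
# microns DFT; adding 10% thinner to the same paint lowers that to ~65.
print(round(expected_dft(120, 60), 1))      # 72.0
print(round(expected_dft(120, 60, 10), 1))  # 65.5
```

An operator can use this to check during application whether the measured WFT is on track to hit the specified DFT range before the coat has cured.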

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies