
356 Parsing Jobs - Page 7

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: GCP Data Engineer
Experience: 6 to 9 years
Location: Chennai
Skills: GCP, ETL, data warehousing, data ingestion, real-time streaming

Job Description: As a Senior Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources. You will bring both structured and unstructured data into our data warehouse and data lake, with real-time streaming and/or batch processing, to generate insights and perform analytics for business teams.

Responsibilities:
- Understanding the business requirements.
- Translating requirements into technical designs.
- Working on data ingestion, preparation, and transformation.
- Developing scripts for data sourcing and parsing.
- Developing data streaming applications.
- Debugging production failures and identifying solutions.
- Working on ETL/ELT development.

What we're looking for: You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solving business problems.

You'll need to have:
- Bachelor's degree or four or more years of work experience.
- Four or more years of relevant work experience.
- Experience with data warehouse concepts and the data management life cycle.

Even better if you have one or more of the following:
- A related ETL/ELT developer certification.
- Accuracy and attention to detail.
- Strong problem-solving, analytical, and research capabilities.
- Strong verbal and written communication skills.
- Experience presenting to and influencing partners.
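
Purely for illustration, a minimal sketch of the batch side of the ingestion work this role describes: loading a CSV from Cloud Storage into a BigQuery warehouse table with the google-cloud-bigquery client. The project, bucket, dataset, and table names are hypothetical placeholders.

```python
# Batch-load a CSV from GCS into BigQuery; names below are invented examples.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.analytics.sales_raw"         # hypothetical target table
uri = "gs://my-ingest-bucket/sales/2024-06-01.csv"  # hypothetical source file

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the header row
    autodetect=True,       # infer the schema from the file
)
load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the load job finishes
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```

Real-time streaming into the same table would go through a different path (for example, Pub/Sub plus the BigQuery write API) rather than file loads.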

Posted 1 week ago

Apply

0.0 - 1.0 years

0 Lacs

Jaipur, Rajasthan

On-site

Source: Indeed

Job Title: Python Developer – Web Scraping Specialist
Experience: 1+ years
Location: Jaipur (Work from Office)
Job Type: Full-Time

Job Summary: We are seeking a detail-oriented and skilled Python Developer with expertise in web scraping to join our technology team. The ideal candidate will have at least 1 year of experience in Python programming and hands-on knowledge of web scraping tools and techniques. You will be responsible for designing and implementing efficient, scalable web crawlers to extract structured data from various websites and online platforms.

Key Responsibilities:
- Design, build, and maintain web scraping scripts and crawlers using Python.
- Utilize tools such as BeautifulSoup, Selenium, and Scrapy to extract data from dynamic and static websites.
- Clean, structure, and store extracted data in usable formats (e.g., CSV, JSON, databases).
- Handle data parsing and anti-scraping measures, and ensure scraping complies with website policies.
- Monitor and troubleshoot scraping tasks for performance and reliability.
- Collaborate with team members to understand data requirements and deliver accurate, timely results.
- Optimize scraping scripts for speed, reliability, and error handling.
- Maintain documentation of scraping processes and the codebase.

Required Skills:
- Solid programming skills in core Python and data manipulation.
- Strong experience in web scraping using BeautifulSoup, Selenium, and Scrapy.
- Familiarity with HTTP protocols, request headers, cookies, and browser automation.
- Understanding of HTML, CSS, and XPath for parsing and navigating web content.
- Ability to handle CAPTCHA and anti-bot mechanisms.
- Experience with data formats like JSON, XML, and CSV.
- Knowledge of version control tools like Git.

Preferred Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field.
- Experience with task schedulers (e.g., cron, Celery) for automated scraping.
- Knowledge of storing data in SQL or NoSQL databases.
- Familiarity with proxy management and user-agent rotation.

Job Type: Full-time
Pay: ₹7,000.00 - ₹35,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Jaipur, Rajasthan: reliably commute or plan to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience: Python: 1 year (Required); BeautifulSoup or Scrapy: 1 year (Required); Selenium: 1 year (Preferred)
Location: Jaipur, Rajasthan (Preferred)
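
For context, a minimal sketch of the requests + BeautifulSoup workflow the listing centers on: fetch a page, select elements, and write rows to CSV. The URL and CSS selectors are placeholders for a real target site; dynamic pages would need Selenium or Scrapy instead.

```python
# Fetch a static page, parse listing cards, and export them to CSV.
import csv
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/listings"  # hypothetical target page

def scrape_listings(url: str) -> list[dict]:
    # Identify the client politely; many sites filter default user agents.
    headers = {"User-Agent": "Mozilla/5.0 (compatible; demo-scraper/0.1)"}
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    # ".listing" and ".title" are placeholder selectors for the real markup.
    for card in soup.select(".listing"):
        title = card.select_one(".title")
        rows.append({"title": title.get_text(strip=True) if title else ""})
    return rows

if __name__ == "__main__":
    data = scrape_listings(URL)
    with open("listings.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["title"])
        writer.writeheader()
        writer.writerows(data)
```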

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Company Description: TechEinHub Technologies Pvt Ltd. (www.techein.com). On-site: Ahmedabad, Gujarat.

Role Description: Experienced backend developer with a strong command of Python and deep expertise in integrating and managing syslog for logging, monitoring, and security event tracking. Skilled in building robust backend services, APIs, and microservices using frameworks like FastAPI, Flask, and Django.

Specialties:
- Syslog protocol (RFC 3164/5424) implementation and parsing
- Centralized logging with syslog (rsyslog/syslog-ng) and the ELK stack
- Real-time log processing and alerting
- Python backend architecture and development
- RESTful APIs, authentication, and authorization
- Async IO, background workers (Celery/RQ), and performance optimization
- Linux environments, socket programming, and system-level log handling

Tech Stack: Python, FastAPI, Flask, Django, PostgreSQL, Redis, Celery, Docker, Linux, ELK Stack, syslog-ng, rsyslog, Prometheus/Grafana, Git, CI/CD pipelines

Share your CV at: info@techein.com
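
A rough sketch of the RFC 3164 parsing this role names, assuming the classic "<PRI>TIMESTAMP HOSTNAME TAG[PID]: MSG" layout; real-world senders often deviate from it, so production parsers need fallbacks.

```python
# Parse a BSD-syslog (RFC 3164) line into its fields.
import re

SYSLOG_3164 = re.compile(
    r"^<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})\s"
    r"(?P<host>\S+)\s"
    r"(?P<tag>[^:\[]+)(\[(?P<pid>\d+)\])?:\s?"
    r"(?P<msg>.*)$"
)

def parse_syslog(line: str) -> dict | None:
    m = SYSLOG_3164.match(line)
    if not m:
        return None  # caller should fall back or quarantine the event
    pri = int(m.group("pri"))
    # PRI encodes facility * 8 + severity, per RFC 3164 section 4.1.1.
    return {**m.groupdict(), "facility": pri // 8, "severity": pri % 8}

# Example line adapted from the RFC itself.
print(parse_syslog("<34>Oct 11 22:14:15 mymachine su[230]: 'su root' failed"))
```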

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

- 3+ years of building models for business application experience
- PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience
- Experience programming in Java, C++, Python or related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

The Amazon Photos team is looking for a world-class Applied Scientist to join us and use AI to help customers relive their cherished memories. Our team of scientists has developed algorithms and models that power Amazon Photos features for millions of photos and videos daily. As part of the team, we expect that you will develop innovative solutions to hard problems at massive scale and publish your findings at peer-reviewed conferences and workshops. With all the recent advancements in vision-language models, Amazon Photos has completely rethought the product roadmap and is looking for Applied Scientists to deliver the short-term roadmap, working closely with Product and Engineering, while also making investments for the long term. Our research themes include, but are not limited to: foundational models, contrastive learning, diffusion models, few-shot and zero-shot learning, transfer learning, unsupervised and semi-supervised methods, active learning and semi-automated data annotation, deep learning, and large-scale image and video detection and recognition.

Key job responsibilities
- Collaborate with cross-functional teams of engineers, product managers, and scientists to identify and solve complex problems in the vision-language model space
- Design and execute experiments to evaluate the performance of different models, and iterate quickly to improve results
- Create robust evaluation frameworks for assessing model performance across different domains and use cases
- Think big about the vision-language model space over a multi-year horizon, and identify new opportunities to apply these technologies to solve real-world problems within Amazon Photos
- Communicate results and insights to both technical and non-technical audiences, including through presentations and written reports

About the team
Amazon Photos is one of the main digital products offered to Prime subscribers along with Amazon Music and Amazon Video. Amazon Photos provides unlimited photo storage and 5 GB for videos to Prime members and is a top Prime benefit in multiple marketplaces. AI-driven experiences based on image and video understanding are core to customer delight for the business. These experiences are delivered in our mobile, web and desktop apps, in Fire TV, and integrated into Alexa devices such as Echo Show. We solve real-world problems using AI while being a positive force for good.

- PhD in computer science, machine learning, engineering, or related fields
- Experience developing and implementing deep learning algorithms, particularly with respect to computer vision algorithms
- Material contributions to the CV/ML/AI field as related to image and video processing

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
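
To make the "contrastive learning" research theme concrete, a toy sketch of a CLIP-style InfoNCE objective in PyTorch, with random tensors standing in for image and text encoder outputs; this is an illustration of the technique, not Amazon's implementation.

```python
# Symmetric contrastive (InfoNCE) loss over a batch of paired embeddings.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    # Normalize so the dot product equals cosine similarity.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img.size(0))    # matched pairs lie on the diagonal
    # Average the image-to-text and text-to-image cross-entropy terms.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Random stand-ins for encoder outputs; real encoders would produce these.
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```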

Posted 1 week ago

Apply

2.0 - 12.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Job Title: Senior Software Engineer - .Net Integration / Team Lead
Location: Ahmedabad, Gujarat, Bangalore, Mumbai, Assam, Chennai
Experience: 2-12 years
Employment Type: Permanent
Education: BE/BTech/BCA/MCA

Required Skills: REST, SOAP, .Net Integration, .Net Core, C#, API, MS SQL. A candidate from an Energy background is preferred.

Role Summary: We are seeking an experienced .Net Integration Lead to design, develop, and maintain robust integration solutions. You will be responsible for parsing various data formats, developing and consuming REST/SOAP APIs, working with RabbitMQ, and building multithreaded C# applications. This role requires strong expertise in MS-SQL Server and C#.Net/ASP.Net Core, along with the ability to work independently, contribute to design documentation, and provide post-production support.

Key Responsibilities:
- Develop and manage integrations involving XML, JSON, CSV, flat files, and Excel data.
- Design, build, and consume REST/SOAP APIs.
- Develop RabbitMQ producer/consumer services.
- Create multitasking/multithreading services in C#.
- Develop and optimize MS-SQL Server databases (T-SQL, stored procedures, views, triggers, cursors).
- Contribute to technical design and user manuals, and provide post-production support.

Must-Have Skills:
- Strong proficiency in C#.Net and ASP.Net Core.
- Extensive experience with MS-SQL Server, including writing complex queries, procedures, views, and triggers.
- Solid understanding of and experience with data formats: XML, JSON, CSV, flat files, Excel (reading and parsing).
- Proven experience in REST/SOAP API development and consumption.
- Experience with RabbitMQ (producer/consumer services).
- Proficiency in multitasking/multithreading service development in C#.
- Excellent debugging and problem-solving skills.
- Ability to work independently with minimal supervision.

Desired Skills: Domain knowledge in the Energy & Power sector is a plus. (ref:hirist.tech)

Posted 1 week ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

Remote

Source: LinkedIn

At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. Those in cloud operations at PwC will focus on managing and optimising cloud infrastructure and services to enable seamless operations and high availability for clients. You will be responsible for monitoring, troubleshooting, and implementing industry-leading practices for cloud-based systems.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Job Title: Observability Engineer – Senior Associate
Location: Bangalore (Hybrid)
Department: Managed Services – Core Automation Team

Job Overview
We are looking for a highly skilled and motivated Observability Engineer at the Associate level to join our Core Automation Team. This role is focused on setting up observability frameworks for enterprise applications, Salesforce, and SAP systems, with a strong emphasis on log analytics and monitoring tool integration. The ideal candidate will have hands-on experience configuring and implementing observability solutions using platforms like Elastic, Splunk, and ServiceNow ITOM, ensuring optimal visibility across environments. Additionally, hands-on scripting experience to automate and configure these observability platforms will be a critical part of the role.

Key Responsibilities
- Design, implement, and configure observability solutions for systems to ensure performance, availability, and system health.
- Set up and configure log analytics platforms such as Elastic Stack and Splunk for centralized logging, data collection, and analysis.
- Integrate observability frameworks with other monitoring tools to provide comprehensive visibility into application environments, including transactional and application-level monitoring.
- Collaborate with cross-functional teams to configure and implement event aggregation and alerting systems to identify potential issues before they impact production environments.
- Develop and implement automation solutions to streamline observability processes, reducing manual interventions and improving system reliability (see the scripting sketch after this listing).
- Create and configure dashboards, reports, and alerts within log analytics platforms to track system performance, detect anomalies, and provide real-time insights.
- Ensure proper configuration of monitoring tools in alignment with business and technical requirements for observability.
- Develop best practices and documentation for observability frameworks, logging standards, and automation processes.
- Continuously optimize the observability solution for better performance, scalability, and ease of use.
- Provide technical guidance to other team members and assist in troubleshooting and resolving issues related to observability and system health.

Required Skills And Qualifications
- Minimum of 5 years of experience designing, configuring, and setting up observability solutions for enterprise systems using Datadog and Dynatrace, with a focus on applications rather than infrastructure.
- Hands-on experience with log analytics platforms like Elastic Stack, Splunk, or similar tools for log collection, parsing, and visualization.
- Strong understanding of event aggregation, incident management, and alerting configurations using platforms like ServiceNow ITOM and BigPanda.
- Familiarity with SAP monitoring tools (e.g., SAP Solution Manager, SAP Cloud ALM) and integration with observability solutions.
- Experience with monitoring tool configuration and tuning for optimal data collection, analysis, and performance.
- Solid knowledge of data pipelines, log processing, and dashboards in log analytics tools.
- Hands-on scripting experience (e.g., Python, Shell) to automate and configure observability platforms and related processes.
- Strong problem-solving skills and the ability to troubleshoot complex issues within observability and SAP environments.
- Ability to work in a fast-paced, collaborative team environment with a focus on high-quality deliverables.

Desired Skills And Qualifications
- Experience configuring and automating with Datadog, ServiceNow ITOM, Prometheus, Grafana, BigPanda and other monitoring/observability tools.
- Familiarity with cloud-based observability solutions (e.g., AWS CloudWatch, Azure Monitor) and their integration with on-premise systems.
- Knowledge of automation frameworks and DevOps methodologies.
- Certifications related to Datadog, Elastic, Splunk, or ITOM will be a plus.

Experience Requirements
A minimum of 5 years of experience with observability solutions, with hands-on experience in APM, log analytics, event aggregation, and Salesforce and SAP monitoring. Proven experience in setting up and configuring observability frameworks, rather than just monitoring.

Education Requirements
Bachelor's degree in Information Technology, Computer Science, Engineering, or a related field. Relevant certifications in APM, Elastic, Splunk, or ITOM are a plus.

Work Environment
- Collaborative and dynamic team environment with opportunities to work on innovative projects.
- Hybrid working model with a base in Bangalore, offering flexibility in working from the office and remotely.
- Opportunity to work with cross-functional teams and lead the implementation of observability solutions.
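
As an illustration of the "scripting to configure observability platforms" requirement, a rough sketch of creating a Datadog log-based monitor through its REST API with the requests library. The query, thresholds, handles, and environment-variable names are illustrative only; consult Datadog's current API docs before relying on the payload shape.

```python
# Create a Datadog log-alert monitor via the v1 monitors API (sketch only).
import os
import requests

payload = {
    "name": "High error rate in payments service",   # invented example monitor
    "type": "log alert",
    "query": 'logs("service:payments status:error").index("*").rollup("count").last("5m") > 100',
    "message": "Error volume is elevated. @pagerduty-payments",
    "options": {"thresholds": {"critical": 100}},
}

resp = requests.post(
    "https://api.datadoghq.com/api/v1/monitor",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],           # account API key
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],   # application key
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print("Created monitor id:", resp.json()["id"])
```

Keeping monitor definitions in scripts like this (or in Terraform) is what makes the configuration reviewable and repeatable across environments.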

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Senior QA Engineer, Automation

Medicine moves too slowly. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights. Velsera provides software and professional services for:
• AI-powered multimodal data harmonization and analytics for drug discovery and development
• IVD development, validation, and regulatory approval
• Clinical NGS interpretation, reporting, and adoption
With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
• You write test automation scripts, involving both back-end and front-end components, according to our best practices and quality standards.
• You build tools, frameworks, and other infrastructure needed to support test automation.
• You review and validate application requirements and technical specifications to define test specifications for functional, integration and performance tests.
• You search for opportunities to enhance all test case automation related activities.
• You work as part of a cross-functional scrum team ("POD"), developing applications and services using agile development methodologies.
• You provide technical guidance to other members of the team on test automation as required.
• You collaborate with the development team to analyze bugs and work closely with them to address production bugs at the earliest.
• You stay up to date with new testing tools and testing strategies, proactively evaluating and improving our testing methodologies for continuous improvement.
• You work collaboratively with the team in writing test scenarios and executing them when the need arises.

What do you bring to the table?
• You are an outstanding QA automation engineer with 5+ years of relevant experience in test automation and in our test automation stack.
• You have a bachelor's or master's degree in computer science.
• You have a strong understanding of software development and test automation principles, such as BDD and TDD. You have experience working in continuous integration and continuous delivery environments.
• Excellent problem-solving and analytical skills.
• Good communication (written and verbal) skills.
• Strong interpersonal skills and the ability to coach other engineers.
• Experience in developing software in a regulated environment is a plus.

Technical Skills

UI Automation with Selenium:
• Proficient in Selenium WebDriver for automating web-based applications.
• Strong knowledge of locators and strategies (XPath, CSS selectors, ID, class).
• Experience in handling dynamic web elements and writing optimized XPath/CSS selectors.
• Familiarity with browser debugging tools and techniques for identifying UI elements.
• Expertise in handling synchronization issues using waits (implicit, explicit, fluent).
• Capability to manage browser settings and profiles for different testing needs.

API Automation with RestAssured:
• Proficient in automating REST APIs using the RestAssured library.
• Strong understanding of HTTP methods (GET, POST, PUT, DELETE, etc.).
• Familiarity with handling request/response payloads in JSON and XML formats.
• Ability to validate response codes, headers, and payloads.
• Knowledge of token-based authentication mechanisms (e.g., OAuth, JWT).
• Experience in chaining API requests and validating dependencies between API responses.

Programming with Java:
• Strong proficiency in Java for scripting and automation.
• Knowledge of object-oriented programming concepts.
• Experience with exception handling, the collections framework, and the Java Streams API.
• Familiarity with file handling, parsing JSON/XML, and reading/writing to databases.
• Understanding of multithreading for handling parallel execution.

Functional Testing:
• Ability to design and execute comprehensive test cases for functional verification.
• Experience with both manual and automated regression testing.
• Knowledge of boundary value analysis, equivalence partitioning, and other testing techniques.

Frameworks and Tools:
• Strong knowledge of testing frameworks like TestNG and JUnit.
• Familiarity with BDD frameworks like Cucumber for writing Gherkin scenarios.
• Experience in building and maintaining custom automation frameworks (data-driven, keyword-driven, or hybrid).
• Proficient in build tools like Maven or Gradle for dependency management.
• Working knowledge of version control systems like Git.

CI/CD and Integration:
• Experience in integrating test automation with CI/CD pipelines (e.g., Jenkins, GitHub Actions, or GitLab CI).
• Ability to set up and configure jobs for running automated tests.
• Familiarity with generating and publishing test execution reports (e.g., Allure, Extent Reports).

Our Core Values
People first. We create collaborative and supportive environments by operating with respect and flexibility to promote mental, emotional and physical health. We practice empathy by treating others the way they want to be treated and assuming positive intent. We are proud of our inclusive diverse team and humble ourselves to learn about and build our connection with each other.
Patient focused. We act with swift determination without sacrificing our expectations of quality. We are driven by providing exceptional solutions for our customers to positively impact patient lives. Considering what is at stake, we challenge ourselves to develop the best solution, not just the easy one.
Integrity. We hold ourselves accountable and strive for transparent communication to build trust amongst ourselves and our customers. We take ownership of our results as we know what we do matters and collectively we will change the healthcare industry. We are thoughtful and intentional with every customer interaction, understanding the overall impact on human health.
Curious. We ask questions and actively listen in order to learn and continuously improve. We embrace change and the opportunities it presents to make each other better. We strive to be on the cutting edge of science and technology innovation by encouraging creativity.
Impactful. We take our social responsibility with the seriousness it deserves and hold ourselves to a high standard. We improve our sustainability by encouraging discussion and taking action as it relates to our natural, social and economic resource footprint. We are devoted to our humanitarian mission and look for new ways to make the world a better place.

Velsera is an Equal Opportunity Employer: Velsera is proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, colour, gender, religion, marital status, domestic partner status, age, national origin or ancestry.
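
A small sketch of the explicit-wait synchronization pattern the listing emphasizes. The role's stack is Java + TestNG, but the WebDriverWait API has the same shape in Selenium's Python bindings, used here for brevity; the URL and CSS selector are placeholders.

```python
# Explicit wait: poll for a clickable element instead of sleeping blindly.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical page under test
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "button.submit"))
    )
    button.click()
finally:
    driver.quit()
```

Explicit waits target one condition on one element, which keeps tests fast and deterministic compared with a global implicit wait.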

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: Senior Gen AI Engineer
Location: Coimbatore
Mode of Interview: In Person

Job Description
- Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data.
- Develop and optimize large-scale generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders).
- Work on tasks involving language modeling, text generation, understanding, and contextual comprehension.
- Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets.
- Build and deploy AI applications on cloud platforms, on any hyperscaler: Azure, GCP or AWS.
- Integrate AI models with our company's data to enhance and augment existing applications.

Role & Responsibility
- Handle data preprocessing, augmentation, and generation of synthetic data.
- Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution).
- Develop and maintain AI pipelines.
- Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models (see the sketch below).
- Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions.
- Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions.
- Ensure the robustness, efficiency, and scalability of AI systems.
- Stay updated with the latest advancements in AI and machine learning technologies.

Skills & Experience
- Strong foundation in machine learning, deep learning, and computer science.
- Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers).
- Experience with natural language processing (NLP) and computer vision is a plus.
- Ability to work independently and as part of a team.
- Advanced programming knowledge in Python, especially AI-centric libraries like TensorFlow, PyTorch, and Keras, including the ability to implement and manipulate the complex algorithms fundamental to developing generative AI models.
- Knowledge of natural language processing (NLP) for text generation projects, including text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models.
- Experience in data management, including data preprocessing, augmentation, and generation of synthetic data: cleaning, labeling, and augmenting data to train and improve AI models.
- Experience in developing and deploying AI models in production environments.
- Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing and scaling AI solutions.
- Should be able to bring new ideas and innovative solutions to our clients.
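
A minimal sketch of the chunking-with-overlap step mentioned in the responsibilities, as used in retrieval-style LLM pipelines. The sizes are illustrative, and production pipelines usually split on token counts (or sentence boundaries) rather than raw characters.

```python
# Split a document into overlapping fixed-size chunks for embedding.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # slide forward, keeping `overlap` chars of context
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "example sentence. " * 200  # stand-in for a real document
pieces = chunk_text(doc)
print(len(pieces), "chunks; each would then be embedded and stored in a vector index")
```

The overlap keeps sentences that straddle a boundary retrievable from at least one chunk, at the cost of some index redundancy.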

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Senior (CTM – Threat Detection & Response)

Key capabilities:
- Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA
- Minimum of Splunk Power User Certification
- Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc.
- Perform remote and on-site gap assessments of the SIEM solution
- Define evaluation criteria and approach based on the client requirement and scope, factoring in industry best practices and regulations
- Conduct interviews with stakeholders and review documents (SOPs, architecture diagrams etc.)
- Evaluate the SIEM based on the defined criteria and prepare audit reports
- Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment
- Understand customer requirements and recommend best practices for SIEM solutions
- Offer consultative advice on security principles and best practices related to SIEM operations
- Design and document a SIEM solution to meet the customer needs
- Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) ones, by creating custom parsers
- Verification of log source data in the SIEM, following the Common Information Model (CIM)
- Experience in parsing and masking of data prior to ingestion in the SIEM (a masking sketch follows this listing)
- Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution
- Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources
- Assist the client with technical guidance to configure end log sources (in scope) to be integrated into the SIEM
- Experience in handling big data integration via Splunk
- Expertise in SIEM content development, which includes developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems
- Hands-on experience in development and customization of Splunk Apps & Add-ons
- Build advanced visualizations (interactive drilldowns, glass tables etc.)
- Build and integrate contextual data into notable events
- Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks
- Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near-real-time visibility into the performance of client applications
- Experience in installation, configuration and usage of premium Splunk Apps and Add-ons such as ES App, UEBA, ITSI etc.
- Sound knowledge of configuring alerts and reports
- Good exposure to automatic lookups, data models and creating complex SPL queries
- Create, modify and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements
- Work with the client SPOC for correlation rule tuning (as per the use case management life cycle), incident classification and prioritization recommendations
- Experience in creating custom commands, custom alert actions, adaptive response actions etc.

Qualification & experience:
- Minimum of 3 to 6 years' experience with a depth of network architecture knowledge that will translate over to deploying and integrating a complicated security intelligence solution into global enterprise environments
- Strong oral, written and listening skills are an essential component of effective consulting
- Strong background in network administration; the ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary
- Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting
- Good to have: experience with design and implementation of Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management
- Multiple cluster deployment and management experience as per vendor guidelines and industry best practices
- Troubleshoot Splunk platform and application issues, escalate issues and work with Splunk support to resolve them
- Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage
- Certifications in a core security-related discipline will be an added advantage

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
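
A small sketch of the pre-ingestion masking step referenced above: redacting obvious PII patterns from raw events before they are forwarded to the SIEM. The patterns are illustrative; Splunk deployments often do this natively with SEDCMD in props.conf instead of external code.

```python
# Mask card numbers and email addresses in a raw event string (sketch only).
import re

MASKS = [
    # 16-digit card numbers, optionally separated by spaces or hyphens.
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "****-****-****-****"),
    # Simple email pattern; real-world address forms vary more widely.
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<email-redacted>"),
]

def mask_event(raw: str) -> str:
    for pattern, replacement in MASKS:
        raw = pattern.sub(replacement, raw)
    return raw

print(mask_event("user=jane.doe@example.com card=4111 1111 1111 1111 action=purchase"))
```

Masking before ingestion (rather than at search time) keeps sensitive values out of the indexed data entirely, which is usually what compliance teams require.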

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Senior (CTM – Threat Detection & Response)

Key capabilities:
- Experience in working with Splunk Enterprise, Splunk Enterprise Security & Splunk UEBA
- Minimum of Splunk Power User Certification
- Good knowledge of programming or scripting languages such as Python (preferred), JavaScript (preferred), Bash, PowerShell, etc.
- Perform remote and on-site gap assessments of the SIEM solution
- Define evaluation criteria and approach based on the client requirement and scope, factoring in industry best practices and regulations
- Conduct interviews with stakeholders and review documents (SOPs, architecture diagrams etc.)
- Evaluate the SIEM based on the defined criteria and prepare audit reports
- Good experience in providing consulting to customers during the testing, evaluation, pilot, production and training phases to ensure a successful deployment
- Understand customer requirements and recommend best practices for SIEM solutions
- Offer consultative advice on security principles and best practices related to SIEM operations
- Design and document a SIEM solution to meet the customer needs
- Experience in onboarding data into Splunk from various sources, including unsupported (in-house built) ones, by creating custom parsers
- Verification of log source data in the SIEM, following the Common Information Model (CIM)
- Experience in parsing and masking of data prior to ingestion in the SIEM
- Provide support for the data collection, processing, analysis and operational reporting systems, including planning, installation, configuration, testing, troubleshooting and problem resolution
- Assist clients to fully optimize the SIEM system capabilities as well as the audit and logging features of the event log sources
- Assist the client with technical guidance to configure end log sources (in scope) to be integrated into the SIEM
- Experience in handling big data integration via Splunk
- Expertise in SIEM content development, which includes developing processes for automated security event monitoring and alerting along with corresponding event response plans for systems
- Hands-on experience in development and customization of Splunk Apps & Add-ons
- Build advanced visualizations (interactive drilldowns, glass tables etc.)
- Build and integrate contextual data into notable events
- Experience in creating use cases under the Cyber Kill Chain and MITRE ATT&CK frameworks
- Capability in developing advanced dashboards (with CSS, JavaScript, HTML, XML) and reports that can provide near-real-time visibility into the performance of client applications
- Experience in installation, configuration and usage of premium Splunk Apps and Add-ons such as ES App, UEBA, ITSI etc.
- Sound knowledge of configuring alerts and reports
- Good exposure to automatic lookups, data models and creating complex SPL queries
- Create, modify and tune SIEM rules to adjust the specifications of alerts and incidents to meet client requirements
- Work with the client SPOC for correlation rule tuning (as per the use case management life cycle), incident classification and prioritization recommendations
- Experience in creating custom commands, custom alert actions, adaptive response actions etc.

Qualification & experience:
- Minimum of 3 to 6 years' experience with a depth of network architecture knowledge that will translate over to deploying and integrating a complicated security intelligence solution into global enterprise environments
- Strong oral, written and listening skills are an essential component of effective consulting
- Strong background in network administration; the ability to work at all layers of the OSI model, including being able to explain communication at any level, is necessary
- Must have knowledge of Vulnerability Management, Windows and Linux basics including installations, Windows domains, trusts, GPOs, server roles, Windows security policies, user administration, Linux security and troubleshooting
- Good to have: experience with design and implementation of Splunk with a focus on IT Operations, Application Analytics, User Experience, Application Performance and Security Management
- Multiple cluster deployment and management experience as per vendor guidelines and industry best practices
- Troubleshoot Splunk platform and application issues, escalate issues and work with Splunk support to resolve them
- Certification in any one SIEM solution such as IBM QRadar, Exabeam or Securonix will be an added advantage
- Certifications in a core security-related discipline will be an added advantage

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About Airtel Digital
We are a fun-loving, energetic and fast-growing company that breathes innovation. We strive to give an unparalleled experience to our customers and win them for life. One in every 24 people on this planet is served by Airtel. Here, we put our customers at the heart of everything we do. We encourage our people to push boundaries and evolve from skilled professionals of today to risk-taking entrepreneurs of tomorrow. We hire people from every realm and offer them opportunities that encourage individual and professional growth. We are always looking for people who are thinkers & doers; people with passion, curiosity & conviction; people who are eager to break away from conventional roles and do 'jobs never done before'.

About the Role:
As a TechOps Engineer you will troubleshoot, debug, evaluate and resolve customer-impacting issues, with a focus on detecting patterns and working with the engineering development and/or product teams to eliminate defects. The position requires a combination of strong troubleshooting, technical, communication and problem-solving skills. This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success.

Key Responsibilities
• Deploy new releases and environments for applications.
• Respond to emails and incident tickets, maintaining issue ownership.
• Build and maintain highly scalable, large-scale deployments globally.
• Co-create and maintain architecture for 100% uptime, e.g. creating alternate connectivity.
• Practice sustainable incident response/management and blameless post-mortems.
• Monitor and maintain production environment stability.
• Perform production support activities, which involve the assignment of issues and issue analysis and resolution within the specified SLAs.
• Coordinate with the application development team to resolve issues on production.
• Suggest fixes to complex issues by doing a thorough analysis of the root cause and impact of the defect.
• Provide daily support with resolution of escalated tickets and act as a liaison to business and technical leads to ensure issues are resolved in a timely manner.
• Perform technical hands-on troubleshooting, including parsing logs and following stack traces.
• Multi-task efficiently; the job holder will have to handle multiple customer requests from various sources.
• Identify and document technical problems, ensuring timely resolution.
• Prioritize workload, providing timely and accurate resolutions.
• Collaborate closely with the team and other stakeholders.

Experience and Skills:
• Self-motivated, with the ability to multitask efficiently.
• Experience executing database queries in any of MySQL, Postgres or Mongo.
• Basic Linux OS knowledge.
• Hands-on experience with shell/UNIX commands.
• Experience with monitoring tools like Grafana and logging tools like ELK.
• REST API working experience: executing curl, analyzing requests and responses, HTTP codes etc.
• Knowledge of incident and escalation practices.
• Ability to troubleshoot issues and handle different types of customer inquiries.
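
For illustration, a small Python equivalent of the curl-based checks listed above: call an endpoint, inspect the HTTP status class, and log enough context to follow up on an incident. The URL is hypothetical.

```python
# Probe a REST endpoint and triage the response by HTTP status class.
import logging
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def check_endpoint(url: str) -> None:
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        logging.error("request failed: %s", exc)  # DNS, timeout, connection reset, ...
        return
    if resp.status_code >= 500:
        # Server-side failure: capture a snippet of the body for the ticket.
        logging.error("server error %s from %s: %s", resp.status_code, url, resp.text[:200])
    elif resp.status_code >= 400:
        logging.warning("client error %s from %s", resp.status_code, url)
    else:
        logging.info("%s OK in %.0f ms", url, resp.elapsed.total_seconds() * 1000)

check_endpoint("https://api.example.com/health")  # hypothetical health endpoint
```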

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

DESCRIPTION
Interested in building the next generation of financial systems that can handle billions of dollars in transactions? Interested in building highly scalable next-generation systems on the Amazon cloud? Massive data volume + complex business rules in a highly distributed and service-oriented architecture = a world-class information collection and delivery challenge. Our challenge is to deliver the software systems which accurately capture, process, and report on the huge volume of financial transactions that are generated each day as millions of customers make purchases, as thousands of vendors and partners are paid, as inventory moves in and out of warehouses, as commissions are calculated, and as taxes are collected in hundreds of jurisdictions worldwide.

Key job responsibilities
- Design, develop, and evaluate highly innovative models for natural language processing (NLP), large language models (LLMs), or large computer vision models.
- Use Python, Jupyter notebooks, and PyTorch to apply machine learning and analytical techniques that create scalable solutions for business problems.
- Research and implement novel machine learning and statistical approaches.
- Mentor interns.
- Work closely with data and software engineering teams to build model implementations and integrate successful models and algorithms in production systems at very large scale.

Preferred Qualifications
- Experience building machine learning models or developing algorithms for business application
- Experience in building speech recognition, machine translation and natural language processing systems (e.g., commercial speech products or government speech projects)
- Experience developing and implementing deep learning algorithms, particularly with respect to computer vision algorithms
- PhD in computer science, machine learning, engineering, or related fields

BASIC QUALIFICATIONS
- 3+ years of building models for business application experience
- PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience
- Experience in patents or publications at top-tier peer-reviewed conferences or journals
- Experience programming in Java, C++, Python or related language
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
- Experience with popular deep learning frameworks such as MXNet and TensorFlow

PREFERRED QUALIFICATIONS
- Experience using Unix/Linux
- Experience in professional software development

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2956973
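
To ground the Python/PyTorch stack named in the responsibilities, a toy training-loop sketch that fits a linear classifier on random data; real work would swap in an NLP or vision model and an actual dataset.

```python
# Minimal PyTorch training loop on synthetic data.
import torch
from torch import nn

model = nn.Linear(20, 2)  # stand-in for a far larger model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))  # synthetic batch

for epoch in range(5):
    opt.zero_grad()           # clear gradients from the previous step
    loss = loss_fn(model(X), y)
    loss.backward()           # backpropagate
    opt.step()                # update parameters
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```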

Posted 1 week ago

Apply

5.0 years

0 - 0 Lacs

India

On-site

Company Introduction:
A dynamic company headquartered in Australia. Multi-award winner, recognized for excellence in the telecommunications industry. Financial Times Fastest-Growing Company APAC 2023. AFR (Australian Financial Review) Fast 100 Company 2022. Great promotion opportunities that acknowledge and reward your hard work. Young, energetic and innovative team; caring and supportive work environment.

About You:
We are seeking an experienced and highly skilled Data Warehouse Engineer with an energetic 'can do' attitude to join our data and analytics team as part of our dynamic IT team. The ideal candidate will have over 5 years of hands-on experience in designing, building, and maintaining scalable data pipelines and reporting infrastructure. You will be responsible for managing our data warehouse, automating ETL workflows, building dashboards, and enabling data-driven decision-making across the organization.

Your responsibilities will include, but are not limited to:
- Design, implement, and maintain robust, scalable data pipelines using Apache NiFi, Airflow, or similar ETL tools (a DAG sketch follows this listing).
- Develop and manage efficient data ingestion and transformation workflows, including web data crawling using Python.
- Create, optimize, and maintain complex SQL queries to support business reporting needs.
- Build and manage interactive dashboards and visualizations using Apache Superset (preferred), Power BI, or Tableau.
- Collaborate with business stakeholders and analysts to gather requirements, define KPIs, and deliver meaningful data insights.
- Ensure data accuracy, completeness, and consistency through rigorous quality assurance processes.
- Maintain and optimize the performance of the data warehouse, supporting high availability and fast query response times.
- Document technical processes and data workflows for maintainability and scalability.

To be successful in this role you will ideally possess:
- 5+ years of experience in data engineering, business intelligence, or a similar role.
- Strong proficiency in Python, particularly for data crawling, parsing, and automation tasks.
- Expertise in SQL (including complex joins, CTEs, window functions) for reporting and analytics.
- Hands-on experience with Apache Superset (preferred) or equivalent BI tools like Power BI or Tableau.
- Proficiency with ETL tools such as Apache NiFi, Airflow, or similar data pipeline frameworks.
- Experience working with cloud-based data warehouse platforms (e.g., Amazon Redshift, Snowflake, BigQuery, or PostgreSQL).
- Strong understanding of data modeling, warehousing concepts, and performance optimization.
- Ability to work independently and collaboratively in a fast-paced environment.

Preferred Qualifications:
- Experience with version control (e.g., Git) and CI/CD processes for data workflows.
- Familiarity with REST APIs and web scraping best practices.
- Knowledge of data governance, privacy, and security best practices.
- Background in the telecommunications or ISP industry is a plus.

Job Types: Full-time, Permanent
Pay: ₹40,000.00 - ₹70,000.00 per month
Benefits: Leave encashment; paid sick time; Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay; yearly bonus
Work Location: In person
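
As referenced above, a minimal Airflow (2.x-style) DAG sketch of the extract-transform-load flow this role describes. The task bodies are stubs and the DAG id is invented; older Airflow versions use `schedule_interval` instead of `schedule`.

```python
# Three-step ETL pipeline wired as a daily Airflow DAG (sketch only).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source API or crawl target")  # stub

def transform():
    print("clean and reshape the extracted rows")           # stub

def load():
    print("upsert into the warehouse reporting table")      # stub

with DAG(
    dag_id="daily_reporting_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```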

Posted 1 week ago

Apply

7.0 years

0 - 0 Lacs

Coimbatore

Remote

Sr. Python Developer | 7+ years | Work Timings: 1 PM to 10 PM | Remote

Job Description:
Core Skill: Hands-on experience with Python development.

Key Responsibilities (including, but not limited to):
This developer should be proficient in Python programming and possess a strong understanding of data structures, algorithms, and database concepts. They are adept at using relevant Python libraries and frameworks and are comfortable working in a data-driven environment. They are responsible for designing, developing, and implementing robust and scalable data parsers, data pipeline solutions, and web applications for data visualization. Core responsibilities include:

Data platform components: Building and maintaining efficient and reliable data pipeline components using Python and related technologies (e.g., Lambda, Airflow). This involves extracting data from various sources, transforming it into usable formats, loading it into target persistence layers, and serving it via API.
Data Visualization (Dash Apps): Developing interactive and user-friendly data visualization applications using Plotly Dash. This includes designing dashboards that effectively communicate complex data insights, enabling stakeholders to make data-driven decisions. (A minimal Dash sketch follows this listing.)
Data Parsing and Transformation: Implementing data parsing and transformation logic using Python libraries to clean, normalize, and restructure data from diverse formats (e.g., JSON, CSV, XML) into formats suitable for analysis and modeling.
Collaboration: Working closely with product leadership and professional services teams to understand product and project requirements, define data solutions, and ensure quality and timely delivery.
Software Development Best Practices: Adhering to software development best practices, including version control (Git), testing (unit, integration), and documentation, to ensure maintainable and reliable code.

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹70,000.00 - ₹80,000.00 per month
Benefits: Work from home
Schedule: Monday to Friday, Morning shift, UK shift, US shift
Education: Bachelor's (Preferred)
Experience: Python: 7 years (Preferred)
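For illustration, a minimal Plotly Dash app of the kind this listing mentions; the data frame and chart are hypothetical placeholders rather than any real pipeline output:

```python
from dash import Dash, dcc, html
import pandas as pd
import plotly.express as px

# Hypothetical data standing in for a real pipeline's output.
df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "sales": [120, 95, 143]})

app = Dash(__name__)
app.layout = html.Div([
    html.H1("Sales Overview"),
    # A single bar chart; a production dashboard would add callbacks and filters.
    dcc.Graph(figure=px.bar(df, x="month", y="sales")),
])

if __name__ == "__main__":
    app.run(debug=True)  # serves on http://127.0.0.1:8050 by default
```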

Posted 1 week ago

Apply

0.0 years

0 Lacs

Delhi, Delhi

On-site

Indeed logo

What You'll Do (Key Responsibilities)
As a Developer Trainee, you’ll be part of a structured training and hands-on development track designed to build your capability in Zoho Creator and Deluge scripting. Here’s what your role will involve:

Zoho Creator Application Development
Learn to design and build custom applications using Zoho Creator’s drag-and-drop interface.
Create and configure forms, reports, dashboards, and workflows tailored to specific business use cases.
Implement best practices in app structuring, form relationships, and user interface optimization.

Deluge Scripting and Logic Building
Use Deluge scripting to write server-side logic, automate processes, and create dynamic behaviors in apps.
Write functions for validations, conditional workflows, API calls, and data transformations.
Maintain readable, modular, and reusable code for future scalability.

Workflow Automation and Business Rules
Build multi-step workflows using Creator's process automation tools (workflow builder, schedules, approvals).
Translate client business processes into logical, streamlined automation.
Configure notifications, escalations, and reminders based on system or user actions.

Integration and API Handling
Assist in integrating Zoho Creator apps with other Zoho apps (CRM, Books, Desk, etc.) and third-party platforms using REST APIs.
Configure webhooks, custom functions, and connectors for end-to-end data flow and synchronization.
Learn OAuth tokens, API authentication, and JSON parsing in a guided setup (see the sketch after this listing).

Data Modeling and Reports
Design efficient database structures with proper form linking and relationship mapping.
Create dynamic reports, charts, and dashboards to visualize critical business data.
Optimize performance through effective use of filters, formulas, and custom views.

Testing, Debugging, and Documentation
Test applications across different scenarios and user roles.
Identify and debug errors in forms, scripts, or workflows during development and deployment.
Document modules, logic flow, known issues, and version changes clearly for internal and client use.

Job Type: Full-time
Pay: ₹18,000.00 - ₹20,000.00 per month
Location Type: In-person
Schedule: Day shift, Monday to Friday
Application Question(s): Do you reside in West Delhi? Please mention your current location. Can you join on an immediate basis?
Work Location: In person
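As a hedged sketch of the API-handling topics above (OAuth tokens and JSON parsing), the snippet below calls a Zoho Creator report endpoint with Python's requests. The owner/app/report names and the token are placeholders, and the URL follows the shape of Zoho's published Creator v2.1 pattern; verify against the current Zoho documentation before relying on it:

```python
# Fetch records from a Zoho Creator report and parse the JSON response.
# All names below are hypothetical; the endpoint shape should be checked
# against Zoho's current Creator API docs.
import requests

ACCESS_TOKEN = "<oauth-access-token>"  # obtained via Zoho's OAuth flow
url = (
    "https://www.zohoapis.com/creator/v2.1/data/"
    "acme_owner/order-tracker/report/All_Orders"  # hypothetical owner/app/report
)
headers = {"Authorization": f"Zoho-oauthtoken {ACCESS_TOKEN}"}

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()

# JSON parsing: each record in the report comes back as a dict.
for record in resp.json().get("data", []):
    print(record.get("ID"), record.get("Status"))
```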

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing...
As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources. This includes bringing both structured and unstructured data into our data warehouse and data lake with real-time streaming and/or batch processing to generate insights and perform analytics for business teams within Verizon.
Understanding the business requirements and transforming them into technical designs.
Working on data ingestion, preparation, and transformation.
Developing the scripts for data sourcing and parsing.
Developing data streaming applications (a hedged sketch follows this listing).
Debugging production failures and identifying solutions.
Working on ETL/ELT development.

What We’re Looking For...
You’re curious about new technologies and the game-changing possibilities they create. You like to stay up-to-date with the latest trends and apply your technical expertise to solving business problems.

You'll Need To Have:
Bachelor’s degree or one or more years of work experience.
Experience with Data Warehouse concepts and the Data Management life cycle.

Even better if you have one or more of the following:
Any related certification as an ETL/ELT developer.
Accuracy and attention to detail.
Good problem solving, analytical, and research capabilities.
Good verbal and written communication.
Experience presenting to and influencing partners.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. #AI&D

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
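A small illustrative sketch of a streaming-ingestion consumer; the listing does not name a broker, so Kafka (via the kafka-python package), the topic, and the field names are assumptions made purely for illustration:

```python
# Consume JSON events from a stream and keep only well-formed records,
# mirroring the ingestion/preparation step described above.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Preparation/transformation: drop events missing required fields.
    if "user_id" in event and "ts" in event:
        print(event["user_id"], event["ts"])
```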

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

Job Description: We are looking for a skilled Technical Trainer with expertise in Zoho’s Deluge scripting language to train and mentor aspiring Zoho developers. The ideal candidate should have 2-5 years of experience in Zoho Creator/CRM development and Deluge scripting.

Roles & Responsibilities:
Conduct hands-on training sessions on Deluge scripting across Zoho Creator, CRM, and other Zoho applications.
Design and deliver structured learning paths, exercises, and capstone projects.
Guide learners in developing custom workflows, automations, and integrations using Deluge.
Provide ongoing mentorship, code reviews, and support.
Evaluate students’ understanding through projects and assignments.
Stay updated with new features in Zoho and Deluge.
Host webinars, live coding demos, and interactive Q&A sessions.
Customize teaching methods to suit beginner and advanced learners.

Technology-Specific Responsibilities:
Zoho Creator: Teach how to build apps, forms, and reports, and automate them with Deluge.
Zoho CRM: Instruct on custom modules, buttons, workflows, and scripting for business logic.
Deluge Scripting: Guide end-to-end from basics to advanced concepts including integration, loops, maps, etc.
API Integration: Train students to consume REST APIs, parse JSON, and trigger webhooks.
Best Practices: Emphasize clean code, modular functions, and efficient workflows.

Requirements:
2-5 years of experience in Zoho One, Creator, CRM, and Deluge scripting.
Proficiency in writing workflows, automations, and integrations.
Solid understanding of REST APIs and JSON parsing.
Clear communication and mentorship ability.

Preferred Skills:
Experience with Zoho Analytics, Zoho Flow, or Zoho Books.
Familiarity with OAuth2 authentication in API integrations.
Exposure to no-code/low-code platforms.
Knowledge of webhook handling and third-party API setup.

Why Join Us?
Opportunity to shape the next generation of Zoho developers.
A dynamic and supportive team environment.
Remote-friendly with flexible working hours.
Competitive pay with growth and leadership paths.

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Job Overview
We are seeking a highly skilled Python Developer to join our dynamic team. The ideal candidate should have strong expertise in Python and its associated libraries, with experience in web scraping, data handling, and automation. You should be an excellent problem solver with great communication skills and a solid understanding of object-oriented programming and data structures.

Key Responsibilities
Develop, test, and maintain efficient Python-based desktop applications.
Work with pandas for data manipulation and analysis.
Write optimized SQL queries for database interactions.
Utilize BeautifulSoup and Selenium for web scraping and automation.
Handle JSON data efficiently for API integrations and data exchange.
Apply object-oriented programming (OOP) principles to software development.
Implement data structures and algorithms to optimize performance.
Troubleshoot and debug code for functionality and efficiency.
Collaborate with cross-functional teams to deliver high-quality solutions.
Document processes and write clean, maintainable code.

Must-Have Skills
✅ Python – Strong proficiency in Python programming.
✅ Pandas – Experience with data manipulation and analysis.
✅ SQL – Ability to write and optimize queries.
✅ BeautifulSoup – Web scraping and parsing HTML/XML data.
✅ JSON – Handling structured data for APIs and storage.
✅ Selenium – Automation and web testing.
✅ OOP Concepts – Strong understanding of object-oriented principles.
✅ Data Structures & Algorithms – Efficient problem-solving abilities.
✅ Problem-Solving Skills – Ability to tackle complex technical challenges.
✅ Communication Skills – Strong verbal and written communication.
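As an illustration of how several of these must-have skills combine, here is a hedged sketch that scrapes a page with BeautifulSoup, shapes the rows with pandas, and emits JSON; the URL and the HTML structure it assumes are hypothetical:

```python
import json

import pandas as pd
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/products", timeout=10)  # hypothetical URL
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Assumed markup: each product sits in <div class="product"> with
# <span class="name"> and <span class="price"> children.
rows = [
    {
        "name": div.find("span", class_="name").get_text(strip=True),
        "price": div.find("span", class_="price").get_text(strip=True),
    }
    for div in soup.find_all("div", class_="product")
]

# Shape with pandas, then serialize to JSON for an API or storage layer.
df = pd.DataFrame(rows)
print(json.dumps(df.to_dict(orient="records"), indent=2))
```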

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About Gruve
Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

About The Role
We are looking for a highly skilled SIEM Consultant with deep hands-on experience in designing, implementing, and configuring Splunk SIEM solutions. The ideal candidate will be responsible for deploying Splunk into customer environments, onboarding diverse log sources, configuring security use cases, and integrating external tools for end-to-end threat visibility. This role demands strong technical expertise, project delivery experience, and the ability to translate security monitoring requirements into Splunk configurations and dashboards.

Key Responsibilities

SIEM Design & Implementation
Lead the design and deployment of Splunk architecture (single/multi-site, indexer clustering, search head clustering, etc.).
Define data ingestion strategies and architecture best practices.
Install, configure, and optimize Splunk components (forwarders, indexers, heavy forwarders, search heads, deployment servers).
Set up and manage Splunk deployment servers, apps, and configuration bundles.

Log Source Onboarding
Identify, prioritize, and onboard critical log sources across IT, cloud, network, security, and application domains.
Develop onboarding playbooks for common and custom log sources.
Create parsing, indexing, and field extraction logic using props.conf, transforms.conf, and custom apps.
Ensure log data is normalized and categorized according to CIM (Common Information Model).

Use Case Development & Configuration
Work with SOC teams to define security monitoring requirements and detection logic.
Configure security use cases, correlation rules, and alerting within Splunk Enterprise Security (ES) or core Splunk.
Develop dashboards, alerts, and scheduled reports to support threat detection, compliance, and operational monitoring.
Tune and optimize correlation rules to reduce false positives.

Tool Integration
Integrate Splunk with third-party tools and platforms such as: ticketing systems (ServiceNow, JIRA), Threat Intelligence Platforms (Anomali), SOAR platforms (Splunk SOAR, Palo Alto XSOAR), and endpoint & network tools (CrowdStrike, Fortinet, Cisco, etc.).
Develop and manage APIs, scripted inputs, and custom connectors for data ingestion and bidirectional integration.

Documentation & Handover
Maintain comprehensive documentation for architecture, configurations, onboarding steps, and operational procedures.
Conduct knowledge transfer and operational training for security teams.
Create runbooks, SOPs, and configuration backups for business continuity.
Prepare HLD and LLD documents for the solution.

Required Skills & Experience
5+ years of experience in SIEM implementation, with at least 3 years focused on Splunk.
Strong knowledge of Splunk architecture, deployment methods, data onboarding, and advanced search.
Experience in building Splunk dashboards, alerts, and use case logic using SPL (Search Processing Language).
Familiarity with the Common Information Model (CIM) and data normalization.
Experience integrating Splunk with external tools and writing automation scripts (Python, Bash, etc.); a hedged example follows below.
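The sketch below runs an SPL search through Splunk's REST export endpoint (/services/search/jobs/export) with Python's requests. Host, credentials, and the search string are placeholders, and verify=False assumes a lab instance with a self-signed certificate:

```python
import json

import requests

SPLUNK = "https://localhost:8089"   # placeholder management endpoint
AUTH = ("admin", "changeme")        # placeholder credentials

resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    data={
        "search": "search index=main sourcetype=syslog | head 5",
        "output_mode": "json",
    },
    auth=AUTH,
    verify=False,  # self-signed cert in a lab environment only
    stream=True,
)
resp.raise_for_status()

# The export endpoint streams one JSON object per line.
for line in resp.iter_lines():
    if line:
        event = json.loads(line)
        print(event.get("result", {}).get("_raw", ""))
```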
Preferred Certifications
Splunk Core Certified Power User
Splunk Certified Admin or Architect
Splunk Enterprise Security Certified Admin (preferred)
Security certifications such as CompTIA Security+, GCIA, or CISSP (optional but beneficial)

Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Key Requirements
Strong proficiency in Android (Kotlin/Java).
Strong knowledge of OOP fundamentals.
Dynamic layout design.
Deep understanding of MVVM architecture and dependency injection (Dagger/Hilt).
Experience with RESTful APIs, JSON parsing, and third-party libraries such as Retrofit.
Location and map integration.
Proficiency in Firebase, push notifications, and real-time database handling.
Knowledge of version control systems such as Git/GitHub/GitLab.
Ability to optimize applications for performance and scalability.
Experience in writing unit tests and UI tests is a plus.
Exposure to Agile development methodologies.

Additional Preferences
Strong problem-solving skills and debugging capabilities.
Experience with CI/CD pipelines for mobile applications.
Familiarity with Play Store deployment processes.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Title: Lead Splunk Engineer
Location: Gurgaon (Hybrid)
Experience: 7-10 Years
Employment Type: Full-time
Notice Period: Immediate Joiners Preferred

Job Summary:
We are seeking an experienced Lead Splunk Engineer to design, deploy, and optimize SIEM solutions, with expertise in Splunk architecture, log management, and security event monitoring. The ideal candidate will have hands-on experience in Linux administration, scripting, and integrating Splunk with tools like ELK & DataDog.

Key Responsibilities:
✔ Design & deploy scalable Splunk SIEM solutions (UF, HF, SH, Indexer Clusters).
✔ Optimize log collection, parsing, normalization, and retention.
✔ Ensure license & log optimization for cost efficiency.
✔ Integrate Splunk with 3rd-party tools (ELK, DataDog, etc.).
✔ Develop automation scripts (Python/Bash/PowerShell).
✔ Create technical documentation (HLD, LLD, Runbooks).

Skills Required:
🔹 Expert in Splunk (Architecture, Deployment, Troubleshooting)
🔹 Strong SIEM & Log Management Knowledge
🔹 Linux/Unix Administration
🔹 Scripting (Python, Bash, PowerShell)
🔹 Experience with ELK/DataDog
🔹 Understanding of German Data Security Standards (GDPR/Data Parsimony)

Why Join Us?
Opportunity to work with cutting-edge security tools.
Hybrid work model (Gurgaon-based).
Collaborative & growth-oriented environment.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Role: Cloud and Observability Engineer
Experience: 3-6 Years+
Location: Gurugram
To Apply: https://forms.gle/mu8BgX7j5PTKF1Lz5

About the Job
Coralogix is a modern, full-stack observability platform transforming how businesses process and understand their data. Our unique architecture powers in-stream analytics without reliance on expensive indexing or hot storage. We specialize in comprehensive monitoring of logs, metrics, traces, and security events with features such as APM, RUM, SIEM, Kubernetes monitoring and more, all enhancing operational efficiency and reducing observability spend by up to 70%. Coralogix is rebuilding the path to observability using a real-time streaming analytics pipeline that provides monitoring, visualization, and alerting capabilities without the burden of indexing. By enabling users to define different data pipelines per use case, we provide deep observability and security insights, at infinite scale, for less than half the cost.

We are looking for a Customer Success Engineer to join our highly experienced global team. The Customer Success Engineer role embodies the critical intersection of technical expertise and a focus on customer satisfaction. This role is tasked with helping Coralogix customers by answering technical questions, architecting solutions, and ensuring successful adoption of the Coralogix platform.

About The Position
Job Summary: As a Cloud and Observability Engineer, you will play a critical role in ensuring a smooth transition of customers’ monitoring and observability infrastructure. Your expertise in various other observability tools, coupled with a strong understanding of DevOps, will be essential in successfully migrating alerts and dashboards through creating extension packages and enhancing the customer's monitoring capabilities. You will collaborate with cross-functional teams, understand their requirements, design migration and extension strategies, execute the migration process, and provide training and support throughout the engagement.

Responsibilities:
Extension Delivery: Build and enhance quality extension packages for alerts, dashboards, and parsing rules in the Coralogix platform to improve the monitoring experience for key services. This entails:
Research related to building world-class extensions, including for container technology, services from cloud service providers, etc.
Building related alerts and dashboards in Coralogix, validating their accuracy and consistency, and creating detailed overviews and documentation.
Configuring parsing rules in Coralogix using regex to structure the data as per requirements (see the sketch after this listing).
Building packages as per Coralogix methodology and standards, and automating ongoing processes using scripting.
Supporting internal stakeholders and customers with queries, issues, and feedback on deployed extensions.
Migration Delivery: Help migrate customer alerts, dashboards, and parsing rules from leading competitive observability and security platforms to Coralogix.
Knowledge Management: Build, maintain, and evolve documentation covering all aspects of extensions and migration. Conduct training sessions for internal stakeholders and customers on platform functionality (alerts, dashboards, parsing, querying, etc.), migration processes and techniques, and extension content. Collaborate closely with internal stakeholders and customers to understand their specific monitoring needs, gather requirements, and ensure alignment during the extension building process.

Professional Experience:
Minimum 3+ years of experience as a Systems Engineer, DevOps Engineer, or similar role, with a focus on monitoring, alerting, and observability solutions.
Cloud Technology Experience: 2+ years of hands-on experience with and understanding of cloud and container technologies (GCP/Azure/AWS + K8s/EKS/GKE/AKS). Cloud service provider DevOps certifications would be a plus.
Observability Expertise: Good knowledge and hands-on experience with two or more observability platforms, including alert creation, dashboard creation, and infrastructure monitoring. Researching the latest industry trends is part of the scope.
Deployments & Automation: Good understanding of CI/CD with at least one deployment and version control tool. Engineers will need to package alerts and dashboards as extension packs on an ongoing basis.
Grafana & PromQL Proficiency: Basic understanding and practical experience with PromQL, Prometheus's query language, for querying metrics and creating custom dashboards. The engineer will also need to learn DataPrime and Lucene syntax on the job.
Troubleshooting Skills: Excellent problem-solving and debugging skills to diagnose issues, identify root causes, and propose effective solutions.
Communication Skills: Strong English verbal and written communication skills to collaborate with the customer's cross-functional teams, deliver training sessions, and create clear technical documentation.
Analytical Thinking: Ability to analyze complex systems, identify inefficiencies or gaps, and propose optimized monitoring solutions.
Availability: Ability to work across US and European time zones. This is a work-from-office role.

Cultural Fit
We’re seeking candidates who are hungry, humble, and smart. Coralogix fosters a culture of innovation and continuous learning, where team members are encouraged to challenge the status quo and contribute to our shared mission. If you thrive in dynamic environments and are eager to shape the future of observability solutions, we’d love to hear from you.
Coralogix is an equal opportunity employer and encourages applicants from all backgrounds to apply.
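To illustrate the regex parsing-rule work mentioned in the responsibilities, here is a small Python sketch that extracts structured fields from a raw log line with named capture groups; the log format is hypothetical, and Coralogix parsing rules apply the same idea server-side:

```python
import re

# Hypothetical log format: "<ISO timestamp> <LEVEL> <service> <message>"
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>[A-Z]+)\s+"
    r"(?P<service>[\w-]+)\s+(?P<message>.*)"
)

line = "2024-05-01T12:00:00 ERROR checkout-api payment gateway timeout"
match = LOG_PATTERN.match(line)
if match:
    fields = match.groupdict()  # structured fields ready for indexing/alerting
    print(fields["level"], fields["service"], "-", fields["message"])
```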

Posted 2 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Kollam, Kerala

On-site

Indeed logo

Amrita Vishwa Vidyapeetham, Bengaluru Campus is inviting applications from qualified candidates for the post of Flutter Developer.
For Details Contact: paikrishnang@am.amrita.edu
Job Title: Flutter Developer
Location: Kollam, Kerala
Required Number: 2

Job description
App Development
Develop and maintain cross-platform mobile applications using Flutter and Dart.
Build responsive and pixel-perfect UIs based on Figma/Adobe XD/UI designs.
Implement new features and functionalities based on project requirements.
State Management
Use appropriate state management techniques such as BLoC, Provider, Riverpod, or GetX.
Maintain scalable and clean state handling across screens and modules.
API Integration
Integrate RESTful APIs and handle data fetching, parsing, and error handling.
Use tools like Dio or HTTP for network calls.
Code Quality
Write clean, maintainable, and testable Dart code.
Follow version control best practices using Git.
Testing and Debugging
Conduct unit testing and widget testing.
Debug and fix performance, UI, and logic issues during development and after release.
Build & Deployment
Understand how to build, sign, and release Android (APK/AAB) and iOS apps.
Collaborate with seniors for publishing apps to the Play Store or App Store.
Documentation
Maintain proper documentation of code and app architecture.
Write README files and API usage notes where applicable.
Learning & Improvement
Stay updated with Flutter releases and best practices.
Actively learn and apply new tools or libraries relevant to the project.

Qualification: BTech/BCA/MCA/MTech
Job category: Project
Experience: 1-2 years
Last date to apply: June 20, 2025

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Indeed logo

Noida, Uttar Pradesh, India | Job ID 766940

Join our Team
About this opportunity:
We are looking for a skilled Telecom Billing Mediation Specialist to manage and optimize the mediation process between network elements and the postpaid billing system.

What you will do:
Implement rules for data filtering, deduplication, and enrichment before sending records to the billing system (a toy sketch follows this listing).
Work with network, IT, and billing teams to ensure smooth integration between mediation and billing platforms.
Optimize mediation rules to handle high-volume CDR processing efficiently.
Perform data reconciliation between network elements, mediation, and billing systems.
Investigate and resolve discrepancies in mediation and billing data.
Monitor system health, troubleshoot issues, and ensure high availability of mediation services.
Conduct root cause analysis (RCA) for mediation-related issues and implement corrective actions.

You will bring:
Hands-on experience with billing mediation platforms (e.g., Amdocs Mediation, IBM, HP, Openet, etc.).
Proficiency in SQL, Linux/Unix scripting, and data transformation tools.
Familiarity with ETL processes, data parsing, and API integrations.
Solid understanding of telecom postpaid billing systems (e.g., Amdocs, HP, Oracle BRM).
Knowledge of network elements (MSC, MME, SGSN, GGSN, PCRF, OCS, IN) and their impact on mediation.
Awareness of revenue assurance and fraud detection in telecom billing.

Key Qualifications:
Bachelor’s degree in Computer Science, E.C.E., or Telecommunications.
10+ years of experience in telecom billing mediation.
Experience with cloud-based mediation solutions (AWS, Azure, GCP) is a plus.
Knowledge of 5G mediation and real-time charging architectures is an advantage.

What happens once you apply?
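A toy Python sketch of the filtering and deduplication rules described above; the field names and thresholds are hypothetical, and real mediation platforms (Amdocs, Openet, etc.) implement this as configurable rule sets rather than ad hoc code:

```python
# Drop zero-duration records and suppress duplicate CDRs before they
# reach the billing system.
cdrs = [
    {"id": "c1", "msisdn": "919800000001", "duration": 120},
    {"id": "c1", "msisdn": "919800000001", "duration": 120},  # duplicate
    {"id": "c2", "msisdn": "919800000002", "duration": 0},    # zero duration
    {"id": "c3", "msisdn": "919800000003", "duration": 45},
]

seen: set[str] = set()
billable = []
for cdr in cdrs:
    if cdr["duration"] <= 0:   # filtering rule: not billable
        continue
    if cdr["id"] in seen:      # deduplication rule: already processed
        continue
    seen.add(cdr["id"])
    billable.append(cdr)

print(billable)  # only c1 and c3 survive
```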

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies