0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Airtel Digital
We are a fun-loving, energetic and fast-growing company that breathes innovation. We strive to give an unparalleled experience to our customers and win them for life. One in every 24 people on this planet is served by Airtel. Here, we put our customers at the heart of everything we do. We encourage our people to push boundaries and evolve from skilled professionals of today into risk-taking entrepreneurs of tomorrow. We hire people from every realm and offer them opportunities that encourage individual and professional growth. We are always looking for people who are thinkers and doers; people with passion, curiosity and conviction; people who are eager to break away from conventional roles and do 'jobs never done before'.

About the Role:
As a TechOps Engineer you will troubleshoot, debug, evaluate and resolve customer-impacting issues, with a focus on detecting patterns and working with the engineering, development and/or product teams to eliminate defects. The position requires a combination of strong troubleshooting, technical, communication and problem-solving skills. This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success.

Key Responsibilities
• Deployment of new releases and environments for applications.
• Responding to emails and incident tickets, maintaining issue ownership.
• Build and maintain highly scalable, large-scale deployments globally.
• Co-create and maintain architecture for 100% uptime, e.g. creating alternate connectivity.
• Practice sustainable incident response/management and blameless post-mortems.
• Monitor and maintain production environment stability.
• Perform production support activities, which involve the assignment of issues and issue analysis and resolution within the specified SLAs.
• Coordinate with the Application Development Team to resolve issues on production.
• Suggest fixes to complex issues by doing a thorough analysis of root cause and impact of the defect.
• Provide daily support with resolution of escalated tickets and act as a liaison to business and technical leads to ensure issues are resolved in a timely manner.
• Technical hands-on troubleshooting, including parsing logs and following stack traces.
• Multi-task efficiently, handling multiple customer requests from various sources.
• Identify and document technical problems, ensuring timely resolution.
• Prioritize workload, providing timely and accurate resolutions.
• Be highly collaborative with the team and other stakeholders.

Experience and Skills:
• Self-motivated, with the ability to multitask efficiently.
• Experience executing database queries in any DB (MySQL, Postgres, Mongo).
• Basic Linux OS knowledge.
• Hands-on experience with Shell/UNIX commands.
• Experience with monitoring tools like Grafana and logging tools like ELK.
• Working experience with REST APIs: executing curl, analyzing requests and responses, HTTP codes, etc.
• Knowledge of incident and escalation practices.
• Ability to troubleshoot issues and handle different types of customer inquiries.
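The REST troubleshooting skill above (reading curl output and interpreting HTTP codes) can be sketched in a few lines of Python. This is an illustrative triage helper, not part of the job description; the bucket names are invented:

```python
# Hypothetical triage helper: classify an HTTP status code the way an
# on-call engineer would when reading curl output or access logs.
def classify_status(code: int) -> str:
    """Map an HTTP status code to a coarse triage bucket."""
    if 200 <= code < 300:
        return "success"
    if 300 <= code < 400:
        return "redirect"
    if code in (401, 403):
        return "auth-error"    # credentials/permissions, not a server fault
    if 400 <= code < 500:
        return "client-error"  # bad request from the caller
    if 500 <= code < 600:
        return "server-error"  # escalate to the application team
    return "unknown"

print(classify_status(503))  # server-error
```

A helper like this is the kind of small automation a TechOps engineer might wire into an alert-routing script.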
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DESCRIPTION
Interested in building the next generation of financial systems that can handle billions of dollars in transactions? Interested in building highly scalable next-generation systems that utilize the Amazon cloud? Massive data volume + complex business rules in a highly distributed and service-oriented architecture = a world-class information collection and delivery challenge. Our challenge is to deliver the software systems which accurately capture, process, and report on the huge volume of financial transactions that are generated each day as millions of customers make purchases, as thousands of vendors and partners are paid, as inventory moves in and out of warehouses, as commissions are calculated, and as taxes are collected in hundreds of jurisdictions worldwide.

Key job responsibilities
Design, develop, and evaluate highly innovative models for Natural Language Processing (NLP), Large Language Models (LLMs), or large computer vision models. Use Python, Jupyter notebooks, and PyTorch to:
• Use machine learning and analytical techniques to create scalable solutions for business problems.
• Research and implement novel machine learning and statistical approaches.
• Mentor interns.
• Work closely with data and software engineering teams to build model implementations and integrate successful models and algorithms into production systems at very large scale.
Basic Qualifications
• 2+ years of experience building models for business applications
• Experience in patents or publications at top-tier peer-reviewed conferences or journals
• Experience programming in Java, C++, Python or a related language
• Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
• Experience with popular deep learning frameworks such as MXNet and TensorFlow

Preferred Qualifications
• Experience building machine learning models or developing algorithms for business applications
• Experience building speech recognition, machine translation and natural language processing systems (e.g., commercial speech products or government speech projects)
• Experience developing and implementing deep learning algorithms, particularly computer vision algorithms
• PhD in computer science, machine learning, engineering, or related fields

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Internal Job Description
Loop competencies: --

Basic Qualifications
• 3+ years of experience building models for business applications
• PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience
• Experience in patents or publications at top-tier peer-reviewed conferences or journals
• Experience programming in Java, C++, Python or a related language
• Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing

Preferred Qualifications
• Experience using Unix/Linux
• Experience in professional software development
Company - ADCI HYD 13 SEZ
Job ID: A2956973
Posted 2 weeks ago
5.0 years
0 - 0 Lacs
India
On-site
Company Introduction:
A dynamic company headquartered in Australia. Multi-award winner, recognized for excellence in the telecommunications industry. Financial Times Fastest-Growing Company APAC 2023. AFR (Australian Financial Review) Fast 100 Company 2022. Great promotion opportunities that acknowledge and reward your hard work. Young, energetic and innovative team; caring and supportive work environment.

About You:
We are seeking an experienced and highly skilled Data Warehouse Engineer with an energetic 'can do' attitude to join our data and analytics team. The ideal candidate will have over 5 years of hands-on experience in designing, building, and maintaining scalable data pipelines and reporting infrastructure. You will be responsible for managing our data warehouse, automating ETL workflows, building dashboards, and enabling data-driven decision-making across the organization.

Your responsibilities will include, but are not limited to:
• Design, implement, and maintain robust, scalable data pipelines using Apache NiFi, Airflow, or similar ETL tools.
• Develop and manage efficient data ingestion and transformation workflows, including web data crawling using Python.
• Create, optimize, and maintain complex SQL queries to support business reporting needs.
• Build and manage interactive dashboards and visualizations using Apache Superset (preferred), Power BI, or Tableau.
• Collaborate with business stakeholders and analysts to gather requirements, define KPIs, and deliver meaningful data insights.
• Ensure data accuracy, completeness, and consistency through rigorous quality assurance processes.
• Maintain and optimize the performance of the data warehouse, supporting high availability and fast query response times.
• Document technical processes and data workflows for maintainability and scalability.
To be successful in this role you will ideally possess:
• 5+ years of experience in data engineering, business intelligence, or a similar role.
• Strong proficiency in Python, particularly for data crawling, parsing, and automation tasks.
• Expertise in SQL (including complex joins, CTEs, window functions) for reporting and analytics.
• Hands-on experience with Apache Superset (preferred) or equivalent BI tools like Power BI or Tableau.
• Proficiency with ETL tools such as Apache NiFi, Airflow, or similar data pipeline frameworks.
• Experience working with cloud-based data warehouse platforms (e.g., Amazon Redshift, Snowflake, BigQuery, or PostgreSQL).
• Strong understanding of data modeling, warehousing concepts, and performance optimization.
• Ability to work independently and collaboratively in a fast-paced environment.

Preferred Qualifications:
• Experience with version control (e.g., Git) and CI/CD processes for data workflows.
• Familiarity with REST APIs and web scraping best practices.
• Knowledge of data governance, privacy, and security best practices.
• Background in the telecommunications or ISP industry is a plus.

Job Types: Full-time, Permanent
Pay: ₹40,000.00 - ₹70,000.00 per month
Benefits: Leave encashment, Paid sick time, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay, Yearly bonus
Work Location: In person
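As a concrete illustration of the SQL skills this role lists (CTEs and window functions), here is a minimal, self-contained sketch using Python's built-in sqlite3 module; the table and figures are invented for the example:

```python
import sqlite3

# A CTE aggregates per-region totals, then a window function ranks them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('APAC', 100), ('APAC', 300), ('EMEA', 200), ('EMEA', 50);
""")
rows = conn.execute("""
    WITH regional AS (
        SELECT region, SUM(amount) AS total FROM orders GROUP BY region
    )
    SELECT region, total,
           RANK() OVER (ORDER BY total DESC) AS rnk
    FROM regional
    ORDER BY rnk
""").fetchall()
print(rows)  # [('APAC', 400, 1), ('EMEA', 250, 2)]
```

The same CTE-plus-window pattern carries over directly to Redshift, Snowflake, BigQuery or PostgreSQL reporting queries.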
Posted 2 weeks ago
7.0 years
0 - 0 Lacs
Coimbatore
Remote
Sr. Python Developer | 7+ years | Work Timings: 1 PM to 10 PM | Remote

Job Description:
Core Skill: Hands-on experience with Python development.

Key Responsibilities (including, but not limited to):
This developer should be proficient in Python programming and possess a strong understanding of data structures, algorithms, and database concepts. They are adept at using relevant Python libraries and frameworks and are comfortable working in a data-driven environment. They are responsible for designing, developing, and implementing robust and scalable data parsers, data pipeline solutions and web applications for data visualization. Their core responsibilities include:
• Data platform components: Building and maintaining efficient and reliable data pipeline components using Python and related technologies (e.g., Lambda, Airflow). This involves extracting data from various sources, transforming it into usable formats, loading it into target persistence layers and serving it via API.
• Data Visualization (Dash apps): Developing interactive and user-friendly data visualization applications using Plotly Dash. This includes designing dashboards that effectively communicate complex data insights, enabling stakeholders to make data-driven decisions.
• Data Parsing and Transformation: Implementing data parsing and transformation logic using Python libraries to clean, normalize, and restructure data from diverse formats (e.g., JSON, CSV, XML) into formats suitable for analysis and modeling.
• Collaboration: Working closely with product leadership and professional services teams to understand product and project requirements, define data solutions, and ensure quality and timely delivery.
• Software Development Best Practices: Adhering to software development best practices, including version control (Git), testing (unit, integration), and documentation, to ensure maintainable and reliable code.
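The parsing-and-transformation responsibility described above amounts to normalizing heterogeneous inputs into one schema before loading. A minimal stdlib-only sketch, with invented field names and sample data:

```python
import csv
import io
import json

# Normalize records that arrive as JSON or CSV into one common shape.
def normalize(record: dict) -> dict:
    # Coerce types and clean strings so downstream loads are uniform.
    return {"id": int(record["id"]), "name": record["name"].strip().title()}

json_src = '[{"id": "1", "name": "  alice  "}]'
csv_src = "id,name\n2,bob\n"

records = [normalize(r) for r in json.loads(json_src)]
records += [normalize(r) for r in csv.DictReader(io.StringIO(csv_src))]
print(records)  # [{'id': 1, 'name': 'Alice'}, {'id': 2, 'name': 'Bob'}]
```

In a real pipeline the same `normalize` step would sit between the extract stage and the persistence layer.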
Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹70,000.00 - ₹80,000.00 per month
Benefits: Work from home
Schedule: Monday to Friday; Morning shift, UK shift, US shift
Education: Bachelor's (Preferred)
Experience: Python: 7 years (Preferred)
Posted 2 weeks ago
0.0 years
0 Lacs
Delhi, Delhi
On-site
What You'll Do (Key Responsibilities)
As a Developer Trainee, you’ll be part of a structured training and hands-on development track designed to build your capability in Zoho Creator and Deluge scripting. Here’s what your role will involve:

Zoho Creator Application Development
• Learn to design and build custom applications using Zoho Creator’s drag-and-drop interface.
• Create and configure forms, reports, dashboards, and workflows tailored to specific business use cases.
• Implement best practices in app structuring, form relationships, and user interface optimization.

Deluge Scripting and Logic Building
• Use Deluge scripting to write server-side logic, automate processes, and create dynamic behaviors in apps.
• Write functions for validations, conditional workflows, API calls, and data transformations.
• Maintain readable, modular, and reusable code for future scalability.

Workflow Automation and Business Rules
• Build multi-step workflows using Creator's process automation tools (workflow builder, schedules, approvals).
• Translate client business processes into logical, streamlined automation.
• Configure notifications, escalations, and reminders based on system or user actions.

Integration and API Handling
• Assist in integrating Zoho Creator apps with other Zoho apps (CRM, Books, Desk, etc.) and third-party platforms using REST APIs.
• Configure webhooks, custom functions, and connectors for end-to-end data flow and synchronization.
• Learn OAuth tokens, API authentication, and JSON parsing in a guided setup.

Data Modeling and Reports
• Design efficient database structures with proper form linking and relationship mapping.
• Create dynamic reports, charts, and dashboards to visualize critical business data.
• Optimize performance through effective use of filters, formulas, and custom views.

Testing, Debugging, and Documentation
• Test applications across different scenarios and user roles.
• Identify and debug errors in forms, scripts, or workflows during development and deployment.
• Document modules, logic flow, known issues, and version changes clearly for internal and client use.

Job Type: Full-time
Pay: ₹18,000.00 - ₹20,000.00 per month
Location Type: In-person
Schedule: Day shift, Monday to Friday
Application Question(s):
• Do you reside in West Delhi? Please mention your current location.
• Can you join on an immediate basis?
Work Location: In person
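JSON parsing for API integrations, mentioned in the trainee track above, is a language-agnostic idea: Deluge has its own Map/List types, but the same concept can be shown in a few lines of Python (the payload here is invented for illustration):

```python
import json

# Parse an API response payload and pull one field out of a nested list,
# as a Zoho integration function would do after a REST call.
payload = '{"data": [{"Last_Name": "Patel", "Email": "a@example.com"}]}'
parsed = json.loads(payload)
emails = [row["Email"] for row in parsed["data"]]
print(emails)  # ['a@example.com']
```

In Deluge the equivalent steps would use `toMap()`/`toList()` style conversions on the response body.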
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You’ll Be Doing...
As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources. This includes bringing both structured and unstructured data into our data warehouse and data lake with real-time streaming and/or batch processing to generate insights and perform analytics for business teams within Verizon.
• Understanding the business requirements.
• Transforming them into technical design.
• Working on data ingestion, preparation and transformation.
• Developing the scripts for data sourcing and parsing.
• Developing data streaming applications.
• Debugging production failures and identifying the solution.
• Working on ETL/ELT development.

What We’re Looking For...
You’re curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solve business problems.

You'll Need To Have
• Bachelor’s degree or one or more years of experience.
• Experience with Data Warehouse concepts and the Data Management life cycle.

Even better if you have one or more of the following:
• Any related certification as an ETL/ELT developer.
• Accuracy and attention to detail.
• Good problem-solving, analytical, and research capabilities.
• Good verbal and written communication.
• Experience presenting to and influencing partners.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. #AI&D

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 2 weeks ago
2.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Job Description:
We are looking for a skilled Technical Trainer with expertise in Zoho’s Deluge scripting language to train and mentor aspiring Zoho developers. The ideal candidate should have 2-5 years of experience in Zoho Creator/CRM development and Deluge scripting.

Roles & Responsibilities:
• Conduct hands-on training sessions on Deluge scripting across Zoho Creator, CRM, and other Zoho applications.
• Design and deliver structured learning paths, exercises, and capstone projects.
• Guide learners in developing custom workflows, automations, and integrations using Deluge.
• Provide ongoing mentorship, code reviews, and support.
• Evaluate students’ understanding through projects and assignments.
• Stay updated with new features in Zoho and Deluge.
• Host webinars, live coding demos, and interactive Q&A sessions.
• Customize teaching methods to suit beginner and advanced learners.

Technology-Specific Responsibilities:
• Zoho Creator: Teach how to build apps, forms, and reports, and automate them with Deluge.
• Zoho CRM: Instruct on custom modules, buttons, workflows, and scripting for business logic.
• Deluge Scripting: Guide end-to-end from basics to advanced concepts including integration, loops, maps, etc.
• API Integration: Train students to consume REST APIs, parse JSON, and trigger webhooks.
• Best Practices: Emphasize clean code, modular functions, and efficient workflows.

Requirements:
• 2-5 years of experience in Zoho One, Creator, CRM, and Deluge scripting
• Proficiency in writing workflows, automations, and integrations
• Solid understanding of REST APIs and JSON parsing
• Clear communication and mentorship ability

Preferred Skills:
• Experience with Zoho Analytics, Zoho Flow, or Zoho Books
• Familiarity with OAuth2 authentication in API integrations
• Exposure to no-code/low-code platforms
• Knowledge of webhook handling and third-party API setup

Why Join Us?
• Opportunity to shape the next generation of Zoho developers.
• A dynamic and supportive team environment.
• Remote-friendly with flexible working hours.
• Competitive pay with growth and leadership paths.
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Overview
We are seeking a highly skilled Python Developer to join our dynamic team. The ideal candidate should have strong expertise in Python and its associated libraries, with experience in web scraping, data handling, and automation. You should be an excellent problem solver with great communication skills and a solid understanding of object-oriented programming and data structures.

Key Responsibilities
• Develop, test, and maintain efficient Python-based desktop applications.
• Work with pandas for data manipulation and analysis.
• Write optimized SQL queries for database interactions.
• Utilize BeautifulSoup and Selenium for web scraping and automation.
• Handle JSON data efficiently for API integrations and data exchange.
• Apply object-oriented programming (OOP) principles to software development.
• Implement data structures and algorithms to optimize performance.
• Troubleshoot and debug code for functionality and efficiency.
• Collaborate with cross-functional teams to deliver high-quality solutions.
• Document processes and write clean, maintainable code.

Must-Have Skills
✅ Python – Strong proficiency in Python programming.
✅ Pandas – Experience with data manipulation and analysis.
✅ SQL – Ability to write and optimize queries.
✅ BeautifulSoup – Web scraping and parsing HTML/XML data.
✅ JSON – Handling structured data for APIs and storage.
✅ Selenium – Automation and web testing.
✅ OOP Concepts – Strong understanding of object-oriented principles.
✅ Data Structures & Algorithms – Efficient problem-solving abilities.
✅ Problem-Solving Skills – Ability to tackle complex technical challenges.
✅ Communication Skills – Strong verbal and written communication.
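The listing above asks for BeautifulSoup experience; as a stdlib-only illustration of the same underlying idea (walking HTML and extracting data), here is a small link collector built on `html.parser`. BeautifulSoup offers a much higher-level API, so treat this as a sketch of the concept, with invented sample markup:

```python
from html.parser import HTMLParser

# Collect every href from <a> tags while parsing an HTML fragment.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

collector = LinkCollector()
collector.feed('<p><a href="/jobs">Jobs</a> <a href="/about">About</a></p>')
print(collector.links)  # ['/jobs', '/about']
```

With BeautifulSoup the equivalent would be a one-liner over `soup.find_all("a")`, but the event-driven parser above shows what happens underneath.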
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Gruve
Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

About The Role
We are looking for a highly skilled SIEM Consultant with deep hands-on experience in designing, implementing, and configuring Splunk SIEM solutions. The ideal candidate will be responsible for deploying Splunk into customer environments, onboarding diverse log sources, configuring security use cases, and integrating external tools for end-to-end threat visibility. This role demands strong technical expertise, project delivery experience, and the ability to translate security monitoring requirements into Splunk configurations and dashboards.

Key Responsibilities

SIEM Design & Implementation
• Lead the design and deployment of Splunk architecture (single/multi-site, indexer clustering, search head clustering, etc.).
• Define data ingestion strategies and architecture best practices.
• Install, configure, and optimize Splunk components (forwarders, indexers, heavy forwarders, search heads, deployment servers).
• Set up and manage Splunk deployment servers, apps, and configuration bundles.

Log Source Onboarding
• Identify, prioritize, and onboard critical log sources across IT, cloud, network, security, and application domains.
• Develop onboarding playbooks for common and custom log sources.
• Create parsing, indexing, and field extraction logic using props.conf, transforms.conf, and custom apps.
• Ensure log data is normalized and categorized according to the CIM (Common Information Model).
Use Case Development & Configuration
• Work with SOC teams to define security monitoring requirements and detection use cases.
• Configure security use cases, correlation rules, and alerting within Splunk Enterprise Security (ES) or core Splunk.
• Develop dashboards, alerts, and scheduled reports to support threat detection, compliance, and operational needs.
• Tune and optimize correlation rules to reduce false positives.

Tool Integration
• Integrate Splunk with third-party tools and platforms such as:
  - Ticketing systems (ServiceNow, JIRA)
  - Threat Intelligence Platforms (Anomali)
  - SOAR platforms (Splunk SOAR, Palo Alto XSOAR)
  - Endpoint and network tools (CrowdStrike, Fortinet, Cisco, etc.)
• Develop and manage APIs, scripted inputs, and custom connectors for data ingestion and bidirectional integration.

Documentation & Handover
• Maintain comprehensive documentation for architecture, configurations, onboarding steps, and operational procedures.
• Conduct knowledge transfer and operational training for security teams.
• Create runbooks, SOPs, and configuration backups for business continuity.
• Prepare HLD and LLD documents for the solution.

Required Skills & Experience
• 5+ years of experience in SIEM implementation, with at least 3 years focused on Splunk.
• Strong knowledge of Splunk architecture, deployment methods, data onboarding, and advanced search.
• Experience in building Splunk dashboards, alerts, and use case logic using SPL (Search Processing Language).
• Familiarity with the Common Information Model (CIM) and data normalization.
• Experience integrating Splunk with external tools and writing automation scripts (Python, Bash, etc.).

Preferred Certifications
• Splunk Core Certified Power User
• Splunk Certified Admin or Architect
• Splunk Enterprise Security Certified Admin (preferred)
• Security certifications like CompTIA Security+, GCIA, or CISSP (optional but beneficial)

Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work.
If you’re passionate about technology and eager to make an impact, we’d love to hear from you. Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
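The field-extraction work described in this role is configured in Splunk itself (props.conf/transforms.conf, usually with regular expressions) rather than written in Python; still, a named-group regex sketch conveys what such a parsing rule does. The log format below is invented for illustration:

```python
import re

# A named-group regex pulls structured fields out of a raw log line,
# analogous to a Splunk field-extraction rule.
pattern = re.compile(
    r"(?P<ts>\S+) (?P<level>[A-Z]+) user=(?P<user>\w+) action=(?P<action>\w+)"
)
line = "2024-05-01T10:00:00Z INFO user=asha action=login"
fields = pattern.match(line).groupdict()
print(fields)
# {'ts': '2024-05-01T10:00:00Z', 'level': 'INFO',
#  'user': 'asha', 'action': 'login'}
```

Once fields are extracted like this, normalizing their names (e.g. to CIM conventions) is what makes cross-source correlation searches possible.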
Posted 2 weeks ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Key Requirements
• Strong proficiency in Android (Kotlin/Java)
• Strong knowledge of OOP fundamentals
• Dynamic layout design
• Deep understanding of MVVM architecture and dependency injection (Dagger/Hilt)
• Experience with RESTful APIs, JSON parsing, and third-party libraries such as Retrofit
• Location and map integration
• Proficiency in Firebase, push notifications, and real-time database handling
• Knowledge of version control systems such as Git/GitHub/GitLab
• Ability to optimize applications for performance and scalability
• Experience in writing unit tests and UI tests is a plus
• Exposure to Agile development methodologies

Additional Preferences
• Strong problem-solving skills and debugging capabilities
• Experience with CI/CD pipelines for mobile applications
• Familiarity with Play Store deployment processes
Posted 2 weeks ago
7.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Lead Splunk Engineer
Location: Gurgaon (Hybrid)
Experience: 7-10 Years
Employment Type: Full-time
Notice Period: Immediate Joiners Preferred

Job Summary:
We are seeking an experienced Lead Splunk Engineer to design, deploy, and optimize SIEM solutions, with expertise in Splunk architecture, log management, and security event monitoring. The ideal candidate will have hands-on experience in Linux administration, scripting, and integrating Splunk with tools like ELK and DataDog.

Key Responsibilities:
✔ Design and deploy scalable Splunk SIEM solutions (UF, HF, SH, Indexer Clusters).
✔ Optimize log collection, parsing, normalization, and retention.
✔ Ensure license and log optimization for cost efficiency.
✔ Integrate Splunk with third-party tools (ELK, DataDog, etc.).
✔ Develop automation scripts (Python/Bash/PowerShell).
✔ Create technical documentation (HLD, LLD, runbooks).

Skills Required:
🔹 Expert in Splunk (architecture, deployment, troubleshooting)
🔹 Strong SIEM and log management knowledge
🔹 Linux/Unix administration
🔹 Scripting (Python, Bash, PowerShell)
🔹 Experience with ELK/DataDog
🔹 Understanding of German data security standards (GDPR/data parsimony)

Why Join Us?
• Opportunity to work with cutting-edge security tools.
• Hybrid work model (Gurgaon-based).
• Collaborative and growth-oriented environment.
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: Cloud and Observability Engineer
Experience: 3-6+ Years
Location: Gurugram
To Apply: https://forms.gle/mu8BgX7j5PTKF1Lz5

About the Job
Coralogix is a modern, full-stack observability platform transforming how businesses process and understand their data. Our unique architecture powers in-stream analytics without reliance on expensive indexing or hot storage. We specialize in comprehensive monitoring of logs, metrics, traces and security events with features such as APM, RUM, SIEM, Kubernetes monitoring and more, all enhancing operational efficiency and reducing observability spend by up to 70%.

Coralogix is rebuilding the path to observability using a real-time streaming analytics pipeline that provides monitoring, visualization, and alerting capabilities without the burden of indexing. By enabling users to define different data pipelines per use case, we provide deep observability and security insights, at infinite scale, for less than half the cost.

We are looking for a Customer Success Engineer to join our highly experienced global team. The Customer Success Engineer role embodies the critical intersection of technical expertise and a focus on customer satisfaction. This role is tasked with helping Coralogix customers by answering technical questions, advising on solution architecture, and ensuring successful adoption of the Coralogix platform.

About The Position
Job Summary: As a Cloud and Observability Engineer, you will play a critical role in ensuring a smooth transition of customers’ monitoring and observability infrastructure. Your expertise in various other observability tools, coupled with a strong understanding of DevOps, will be essential in successfully migrating alerts and dashboards, creating extension packages, and enhancing the customer's monitoring capabilities.
You will collaborate with cross-functional teams, understand their requirements, design migration and extension strategies, execute the migration process, and provide training and support throughout the engagement.

Responsibilities:

Extension Delivery: Build and enhance quality extension packages for alerts, dashboards and parsing rules in the Coralogix platform to improve the monitoring experience for key services. This entails:
• Research related to building world-class extensions, including for container technology, services from cloud service providers, etc.
• Building related alerts and dashboards in Coralogix, validating their accuracy and consistency, and creating their detailed overviews and documentation.
• Configuring parsing rules in Coralogix using regex to structure the data as per requirements.
• Building packages as per Coralogix methodology and standards, and automating ongoing processes using scripting.
• Supporting internal stakeholders and customers with queries, issues and feedback regarding deployed extensions.

Migration Delivery: Help migrate customer alerts, dashboards and parsing rules from leading competitive observability and security platforms to Coralogix.

Knowledge Management:
• Build, maintain and evolve documentation for all aspects of extensions and migration.
• Conduct training sessions for internal stakeholders and customers on all aspects of platform functionality (alerts, dashboards, parsing, querying, etc.), the migration process and techniques, and extension content.
• Collaborate closely with internal stakeholders and customers to understand their specific monitoring needs, gather requirements, and ensure alignment during the extension building process.

Professional Experience: Minimum 3+ years of experience as a Systems Engineer, DevOps Engineer, or similar roles, with a focus on monitoring, alerting, and observability solutions.
Cloud Technology Experience: 2+ years of hands-on experience with and understanding of cloud and container technologies (GCP/Azure/AWS + Kubernetes: EKS/GKE/AKS). Cloud service provider DevOps certifications would be a plus.

Observability Expertise: Good knowledge and hands-on experience with two or more observability platforms, including alert creation, dashboard creation, and infrastructure monitoring. Researching the latest industry trends is part of the scope.

Deployments & Automation: Good understanding of CI/CD with at least one deployment and version control tool. Engineers will need to package alerts and dashboards as extension packs on an ongoing basis.

Grafana & PromQL Proficiency: Basic understanding and practical experience with PromQL, Prometheus's query language, for querying metrics and creating custom dashboards. You would also need to learn DataPrime and Lucene syntax on the job.

Troubleshooting Skills: Excellent problem-solving and debugging skills to diagnose issues, identify root causes, and propose effective solutions.

Communication Skills: Strong English verbal and written communication skills to collaborate with the customer's cross-functional teams, deliver training sessions, and create clear technical documentation.

Analytical Thinking: Ability to analyze complex systems, identify inefficiencies or gaps, and propose optimized monitoring solutions.

Availability: Ability to work across US and European timezones. This is a work-from-office role.

Cultural Fit: We're seeking candidates who are hungry, humble, and smart. Coralogix fosters a culture of innovation and continuous learning, where team members are encouraged to challenge the status quo and contribute to our shared mission. If you thrive in dynamic environments and are eager to shape the future of observability solutions, we'd love to hear from you. Coralogix is an equal opportunity employer and encourages applicants from all backgrounds to apply.
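As an illustration of the regex-based parsing work this role describes, here is a minimal Python sketch of turning a raw log line into named fields. The log format, pattern, and field names are invented for illustration; they do not reflect Coralogix's actual parsing-rule syntax.

```python
import re

# Hypothetical access-log format; a real parsing rule targets your own sources.
LOG_PATTERN = re.compile(
    r'(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) - - '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>[A-Z]+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line):
    """Return a dict of named fields if the line matches, else None."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '203.0.113.7 - - [12/Jun/2025:10:01:44 +0000] "GET /health HTTP/1.1" 200 512'
print(parse_line(line))
# {'ip': '203.0.113.7', 'timestamp': '12/Jun/2025:10:01:44 +0000',
#  'method': 'GET', 'path': '/health', 'status': '200', 'bytes': '512'}
```

Named groups keep downstream code readable: consumers reference `event["status"]` rather than a positional capture index, which matters once a rule is maintained by a team.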
Posted 2 weeks ago
0.0 - 2.0 years
0 Lacs
Kollam, Kerala
On-site
Amrita Vishwa Vidyapeetham, Bengaluru Campus is inviting applications from qualified candidates for the post of Flutter Developer. For details contact: paikrishnang@am.amrita.edu

Job Title: Flutter Developer
Location: Kollam, Kerala
Required Number: 2

Job description:
App Development: Develop and maintain cross-platform mobile applications using Flutter and Dart. Build responsive and pixel-perfect UIs based on Figma/Adobe XD/UI designs. Implement new features and functionalities based on project requirements.
State Management: Use appropriate state management techniques such as BLoC, Provider, Riverpod, or GetX. Maintain scalable and clean state handling across screens and modules.
API Integration: Integrate RESTful APIs and handle data fetching, parsing, and error handling. Use tools like Dio or HTTP for network calls.
Code Quality: Write clean, maintainable, and testable Dart code. Follow version control best practices using Git.
Testing and Debugging: Conduct unit testing and widget testing. Debug and fix performance, UI, and logic issues during development and after release.
Build & Deployment: Understand how to build, sign, and release Android (APK/AAB) and iOS apps. Collaborate with seniors for publishing apps to the Play Store or App Store.
Documentation: Maintain proper documentation of code and app architecture. Write README files and API usage notes where applicable.
Learning & Improvement: Stay updated with Flutter releases and best practices. Actively learn and apply new tools or libraries relevant to the project.

Qualification: BTech/BCA/MCA/MTech
Job category: Project
Experience: 1-2 years
Last date to apply: June 20, 2025
Posted 2 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India | Job ID 766940

Join our Team

About this opportunity: We are looking for a skilled Telecom Billing Mediation Specialist to manage and optimize the mediation process between network elements and the postpaid billing system.

What you will do:
Implement rules for data filtering, deduplication, and enrichment before sending to the billing system. Work with network, IT, and billing teams to ensure smooth integration between mediation and billing platforms. Optimize mediation rules to handle high-volume CDR processing efficiently. Perform data reconciliation between network elements, mediation, and billing systems. Investigate and resolve discrepancies in mediation and billing data. Monitor system health, troubleshoot issues, and ensure high availability of mediation services. Conduct root cause analysis (RCA) for mediation-related issues and implement corrective actions.

You will bring:
Hands-on experience with billing mediation platforms (e.g. Amdocs Mediation, IBM, HP, Openet, etc.). Proficiency in SQL, Linux/Unix scripting, and data transformation tools. Familiarity with ETL processes, data parsing, and API integrations. Solid understanding of telecom postpaid billing systems (e.g. Amdocs, HP, Oracle BRM). Knowledge of network elements (MSC, MME, SGSN, GGSN, PCRF, OCS, IN) and their impact on mediation. Awareness of revenue assurance and fraud detection in telecom billing.

Key Qualifications:
Bachelor's degree in Computer Science, E.C.E., or Telecommunications. 10+ years of experience in telecom billing mediation. Experience in cloud-based mediation solutions (AWS, Azure, GCP) is a plus. Knowledge of 5G mediation and real-time charging architectures is an advantage.

What happens once you apply?
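The deduplication step described above can be sketched in a few lines of Python. The CDR fields and the choice of dedup key below are illustrative only; real mediation platforms key on whatever combination of fields the operator defines.

```python
# Toy CDR de-duplication: keep the first record seen for each key.
# The (msisdn, start_time, call_id) key is a hypothetical example.
def deduplicate_cdrs(cdrs):
    seen = set()
    unique = []
    for cdr in cdrs:
        key = (cdr["msisdn"], cdr["start_time"], cdr["call_id"])
        if key not in seen:
            seen.add(key)
            unique.append(cdr)
    return unique

cdrs = [
    {"msisdn": "9198xxxx01", "start_time": "2025-06-12T10:00:00", "call_id": "A1", "duration": 60},
    {"msisdn": "9198xxxx01", "start_time": "2025-06-12T10:00:00", "call_id": "A1", "duration": 60},  # duplicate
    {"msisdn": "9198xxxx02", "start_time": "2025-06-12T10:05:00", "call_id": "B7", "duration": 30},
]
print(len(deduplicate_cdrs(cdrs)))  # 2
```

In production this logic runs over millions of records, so the dedup window is usually bounded in time and backed by persistent storage rather than an in-memory set.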
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana
On-site
- 3+ years of building models for business application experience - PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience - Experience in patents or publications at top-tier peer-reviewed conferences or journals - Experience programming in Java, C++, Python or related language - Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing Interested in building something new? Join the Amazon Autos team on an exhilarating journey to redefine the vehicle shopping experience. This is an opportunity to be part of Amazon's new business ventures. Our goal is to create innovative automotive discovery and shopping experiences on Amazon, providing customers with greater convenience and a wider selection. You'll work in a creative, fast-paced, and entrepreneurial environment at the center of Amazon's innovation. As a key member, you'll play a pivotal role in helping us achieve our mission. We are looking for a highly accomplished Applied Science professional to drive our science strategy, foster a culture of data-driven decision-making, and deliver impactful business outcomes through advanced, state-of-the-art science methodologies. If you're enthusiastic about innovating and delivering exceptional shopping experiences to customers, thrive on new challenges, and excel at solving complex problems using top-notch ML models, LLM and GenAI techniques, then you're the perfect candidate for this role. Strong business acumen and interpersonal skills are a must, as you'll work closely with business owners to understand customer needs and design scalable solutions. Join us on this exhilarating journey and be part of redefining the vehicle shopping experience. 
Key job responsibilities As an Applied Scientist in Amazon Autos, you will: - Shape the roadmap and strategy for applying science to solve customer problems in the Amazon AutoStore domain. - Drive big picture innovations with clear roadmaps for intermediate delivery. - Apply your skills in areas such as deep learning and reinforcement learning while building scalable solutions for business problems. - Produce and deliver models that help build best-in-class customer experiences and build systems that allow us to deploy these models to production with low latency and high throughput. - Utilize your Generative AI, time series and predictive modeling skills, and creative problem-solving skills to drive new projects from ideation to implementation. - Interface with business customers, gathering requirements and delivering science solutions. - Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, to define project requirements, establish success metrics, and deliver high-quality solutions. - Effectively communicate complicated machine learning concepts to multiple partners. - Research new and innovative machine learning approaches. A day in the life In this role, you will be part of a multidisciplinary team working on one of Amazon's newest business ventures. As a key member, you will collaborate closely with engineering, product, design, operations, and business development to bring innovative solutions to our customers. Your science expertise will be leveraged to research and deliver novel solutions to existing problems, explore emerging problem spaces, and create new knowledge. You will invent and apply state-of-the-art technologies, such as large language models, machine learning, natural language processing, and computer vision, to build next-generation solutions for Amazon. You'll publish papers, file patents, and work closely with engineers to bring your ideas to production. 
About the team: This is a critical role for the Amazon Autos team, with a vision to create innovative automotive discovery and shopping experiences on Amazon, providing customers better convenience and more selection. We’re collaborating with other experienced teams at Amazon to define the future of how customers research and shop for cars online.

Preferred qualifications:
- Experience using Unix/Linux
- Experience in professional software development
- Experience building complex software systems, especially involving deep learning, machine learning and computer vision, that have been successfully delivered to customers

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities:
As a Web Scraper, your role is to apply your knowledge to fetch data from multiple online sources.
- Develop highly reliable web scrapers and parsers across various websites
- Extract structured/unstructured data and store it in SQL/NoSQL data stores
- Work closely with Project/Business/Research teams to provide scraped data for analysis
- Maintain the scraping projects delivered to production
- Develop frameworks for automating and maintaining a constant flow of data from multiple sources
- Work independently with minimum supervision
- Develop a deep understanding of the data sources on the web and know exactly how, when, and which data to scrape, parse, and store

Required Skills and Experience:
- 3 to 7 years of experience as a web scraper
- Proficient knowledge of Python and working knowledge of web crawling/web scraping in Python: Requests, BeautifulSoup or urllib, and Selenium, Playwright
- Strong knowledge of basic Linux commands for system navigation, management, and troubleshooting
- Expertise in proxy usage to ensure secure and efficient network operations
- Experience with captcha-solving techniques for seamless automation and data extraction
- Experience with data parsing: strong knowledge of regular expressions, HTML, CSS, DOM, and XPath; knowledge of JavaScript would be a plus
- Ability to access, manipulate, and transform data from a variety of database and flat-file sources; MongoDB and MySQL skills are essential
- Ability to develop reusable code-based scraping products which can be used by others
- GIT knowledge is mandatory for version control and collaborative development workflows
- Experience handling cloud servers on platforms like AWS, GCP, and LEAPSWITCH for scalable and reliable infrastructure management
Ability to ask the right questions and deliver the right results in a way that is understandable and usable to your clients. A track record of digging into tough problems, attacking them from different angles, and bringing innovative approaches to bear is highly desirable. Must be capable of self-teaching new techniques. (ref:hirist.tech)
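The HTML-parsing part of the job can be sketched with nothing but the standard library. This toy link extractor is illustrative only; production scrapers would typically use Requests plus BeautifulSoup, or Selenium/Playwright for JavaScript-heavy pages, as the posting notes.

```python
from html.parser import HTMLParser

# Collect the href of every <a> tag encountered while parsing.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

html = '<ul><li><a href="/item/1">One</a></li><li><a href="/item/2">Two</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/item/1', '/item/2']
```

The same event-driven pattern scales to extracting any tag/attribute pair; a real crawler would feed it pages fetched over the network and follow the collected links.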
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities:
As a Web Scraper, your role is to apply your knowledge to fetch data from multiple online sources.
- Develop highly reliable web scrapers and parsers across various websites
- Extract structured/unstructured data and store it in SQL/NoSQL data stores
- Work closely with Project/Business/Research teams to provide scraped data for analysis
- Maintain the scraping projects delivered to production
- Develop frameworks for automating and maintaining a constant flow of data from multiple sources
- Work independently with minimum supervision
- Develop a deep understanding of the data sources on the web and know exactly how, when, and which data to scrape, parse, and store

Required Skills and Experience:
- 1 to 3 years of experience as a web scraper
- Proficient knowledge of Python and working knowledge of web crawling/web scraping in Python: Requests, BeautifulSoup or urllib, and Selenium, Playwright
- Strong knowledge of basic Linux commands for system navigation, management, and troubleshooting
- Expertise in proxy usage to ensure secure and efficient network operations
- Experience with captcha-solving techniques for seamless automation and data extraction
- Experience with data parsing: strong knowledge of regular expressions, HTML, CSS, DOM, and XPath; knowledge of JavaScript would be a plus
- Ability to access, manipulate, and transform data from a variety of database and flat-file sources; MongoDB and MySQL skills are essential
- Ability to develop reusable code-based scraping products which can be used by others
- GIT knowledge is mandatory for version control and collaborative development workflows
- Experience handling cloud servers on platforms like AWS, GCP, and LEAPSWITCH for scalable and reliable infrastructure management
Ability to ask the right questions and deliver the right results in a way that is understandable and usable to your clients. A track record of digging into tough problems, attacking them from different angles, and bringing innovative approaches to bear is highly desirable. (ref:hirist.tech)
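For the structured-extraction side of the role, the standard library's ElementTree supports a useful subset of XPath. The XML snippet and field names below are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical product feed; real scrapes target whatever markup the source serves.
doc = ET.fromstring(
    "<products>"
    "<product><name>Widget</name><price>9.99</price></product>"
    "<product><name>Gadget</name><price>19.50</price></product>"
    "</products>"
)

# "./product" is ElementTree's limited-XPath syntax for direct children.
rows = [
    {"name": p.findtext("name"), "price": float(p.findtext("price"))}
    for p in doc.findall("./product")
]
print(rows)
# [{'name': 'Widget', 'price': 9.99}, {'name': 'Gadget', 'price': 19.5}]
```

Rows in this shape drop straight into a MySQL insert or a MongoDB `insert_many`, which is why scrapers usually normalize to a list of dicts before touching storage.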
Posted 2 weeks ago
3.0 - 6.0 years
12 - 18 Lacs
Pune
Work from Office
Job Description: We're searching for a Senior Security Engineer to assist our 24x7 managed security operations center. This role is in the Integration Department, responsible for the strategic, technical, and operational direction of the Integration Team.

Responsibilities:
• IBM QRadar / Sentinel / Datadog integration and content management; Event Collector deployment/upgrades.
• Troubleshooting skills at all layers of the OSI model.
• Onboard all standard devices to QRadar, such as Windows Security Events, firewalls, antivirus, proxy, etc.
• Onboard non-standard devices by researching the product and coordinating with different teams, such as application onboarding or onboarding new security products.
• Develop and deploy connectors and scripts for log collection for cloud-based solutions.
• Detailed validation of parsing and normalization of logs before handover to the SOC team will be a day-to-day job.
• Coordinate between the customer and internal teams for issues related to log collection.
• Ensure that the various teams have completed their tasks, such as log validation, Log Source Not Reporting (LSNR) automation, and content management, before a log source goes into production.
• Troubleshoot API-based log sources.
• Documentation of integrations and versioning.

Essential Skills:
• Prior SIEM administration and integration experience (QRadar, Splunk, Datadog, Azure Sentinel).
• Network and endpoint device integration and administration.
• Knowledge of device integration: log and flow collection.
• Knowledge of regular expressions and a scripting language (e.g. Bash, Python, PowerShell); API implementation and development.
• Knowledge of parser creation and maintenance.
• Knowledge of cloud technologies and implementation.
• Excellent verbal and written communication.
• Hands-on experience in networking, security solutions, and endpoint administration and operations.
Additional Desired Skills:
• Excel and formulas
• Documentation and presentation
• Quick response to issues and mail, with prioritization
• Ready to work in a 24x7 environment

Education Requirements & Experience:
• BE/B.Tech, BCA
• Experience Level: 3+ years
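The parse-and-normalize validation this role performs before handing a log source to the SOC can be sketched in Python. The firewall log format and target schema here are hypothetical, not a specific QRadar DSM.

```python
import re

# Toy key=value firewall event; real sources vary widely per vendor.
FIREWALL_RE = re.compile(
    r"action=(?P<action>\w+)\s+src=(?P<src_ip>[\d.]+)\s+"
    r"dst=(?P<dst_ip>[\d.]+)\s+dport=(?P<dst_port>\d+)"
)

def normalize(raw):
    """Map a raw event onto a common schema; fail loudly on unparsed events."""
    m = FIREWALL_RE.search(raw)
    if not m:
        raise ValueError("unparsed event: " + raw)
    event = m.groupdict()
    event["dst_port"] = int(event["dst_port"])  # coerce types for downstream rules
    return event

sample = "Jun 12 10:01:44 fw01 action=deny src=10.0.0.5 dst=198.51.100.9 dport=3389"
print(normalize(sample))
# {'action': 'deny', 'src_ip': '10.0.0.5', 'dst_ip': '198.51.100.9', 'dst_port': 3389}
```

Failing loudly on unparsed events, rather than silently dropping them, is what makes "Log Source Not Reporting" style automation possible: unmatched lines become a countable signal instead of missing data.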
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
🚀 Software Engineer Intern (Remote-First | Hybrid Option | Summer 2025) You bring the fundamentals. We’ll hand you the fire. 🧠 About Hirebix Hirebix is not another job board or agency. We’re rebuilding technical hiring from the ground up—with code, not spreadsheets. We’re building an AI-powered recruitment SaaS to help startups filter noise, run async tech rounds, and cut hiring time by 50%. We’re early-stage, product-obsessed, and moving fast. This is your chance to build the core engine of what could power hundreds of tech teams. 💻 Internship Structure This is a remote-first, hybrid-enabled internship designed for: 🧑💻 Students or fresh grads looking for summer internships or first break into product building. 📍 3-month unpaid internship focused on skill-building, mentorship, and contribution. 💼 High-performing interns will be offered a full-time paid role post internship, based on contribution. 🔨 What You’ll Work On No fake tasks. No shadowing. You’ll work directly with our founding engineer(s) to: Architect and build modules of the Hirebix recruitment SaaS (backend-heavy) Integrate AI/LLM features for resume parsing, feedback generation, and interview simulation Write scalable, modular code (Python, Node.js, FastAPI, etc.) 🧠 What We’re Looking For We don’t care about your GPA. We do care if you’ve: ✅ Strong fundamentals in Data Structures, Algorithms, and OOP ✅ Built anything end-to-end (solo or in a team—hackathons count!) 
✅ Explored Python, Node.js, or any backend stack
✅ Curiosity to work with AI/ML models and automate things
✅ Hunger to learn, fail fast, and ask better questions

Bonus points:
You’ve dabbled with Docker, APIs, MongoDB, or FastAPI
You’ve tried building your own bots, tools, or scrapers

📍 Logistical Bits
Mode: Remote-first (Hybrid available in Gurugram if desired)
Duration: 3 Months (May–Aug 2025 preferred)
Stipend: Unpaid during internship
Post-internship: Performance-based Full-Time Paid Role opportunity
Certificate + LOR: Yes, for all sincere contributors
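To make the resume-parsing feature concrete, here is a deliberately naive Python stand-in for the LLM-backed extraction the internship describes. The skill list and matching logic are invented; a real pipeline would use a model, not keyword intersection.

```python
# Toy skill extractor: tokenize the resume text and intersect with a known set.
KNOWN_SKILLS = {"python", "node.js", "fastapi", "docker", "mongodb"}

def extract_skills(resume_text):
    """Return the known skills mentioned in the text, sorted alphabetically."""
    tokens = {t.strip(".,()").lower() for t in resume_text.split()}
    return sorted(KNOWN_SKILLS & tokens)

resume = "Built a REST API with FastAPI and Python, containerised with Docker."
print(extract_skills(resume))  # ['docker', 'fastapi', 'python']
```

Even in an LLM-based system, a cheap deterministic pass like this is useful as a sanity check on model output.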
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We're a technology company that leads with our humanity, driving our business priorities alongside meaningful social, community, and societal impact.

BPI is built on the Blue Planet Cloud Native Platform (CNP), a modern OSS that converges design, delivery, and assurance software applications to eliminate inefficient operational silos and helps streamline the introduction and delivery of innovative services across network domains and vendors. We are looking for a software engineer who will contribute to developing industry-leading dynamic network inventory software.

Job Requirements:
- Software development experience in Java. Extremely competent in Java, with emphasis on Core Java (OOP concepts, design skills, multi-threading, concurrency, the Collections Framework, exception handling, and debugging skills), Java Swing, JavaFX, JAXB, XML parsing techniques, socket programming, etc.
- Familiarity with relational and non-relational database concepts; experience writing queries on databases like Oracle and Neo4j.
- Familiarity with UI technologies such as Angular.
- Excellent troubleshooting/debugging skills.
- Excellent problem-solving skills.
- Strong knowledge of operating systems: Linux, macOS, and Windows.
- Strong commitment to product excellence and quality.
- Ability to resolve complex issues that may require design trade-offs.
- Bachelor's/Master of Engineering in computer science or a related discipline.
- Excellent written and verbal communication skills; able to collaborate effectively with multiple teams across geographically diverse areas.

Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.
At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
Posted 2 weeks ago