
3627 Querying Jobs - Page 2

Set up a job alert
JobPe aggregates results for easy access, but applications are submitted directly on the original job portal.

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JOB_POSTING-3-73069-1

Job Description
Role Title: AVP, Detection Operations (L10)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent into leadership roles.

Organizational Overview
This role resides in the Security Automation and Detection Operations organization, a global team responsible for streamlining security events through automation and optimizing detection development. This team partners closely with Technical Intel, JSOC, and other partners to develop and deliver high-fidelity security alerting to protect Synchrony from cyber threats.

Role Summary/Purpose
The AVP, Detection Operations is responsible for managing and optimizing Splunk Enterprise Security (ES) to enhance security operations and threat detection, including managing Splunk Mission Control to improve incident response workflows. Key duties include building and maintaining Splunk ES data models, assets, and identities to improve analytics, entity correlation, and security posture. The role requires developing and managing alert actions to automate and optimize threat detection and response processes. Additionally, this position involves leveraging CDLC pipelines to facilitate detection logic integration. Automated validation of logs and detection logic is also essential to ensure accuracy and reliability in threat detection and response. The role combines technical expertise in Splunk ES, security operations, and automation to enhance system performance, ensure timely response to security incidents, and drive efficient security analytics.

Key Responsibilities
Splunk Mission Control: Develop and manage Splunk Mission Control to enhance incident response capabilities and streamline security operations.
CDLC Pipelines / Detection as Code: Employ CDLC pipelines to expedite and integrate detection logic across systems.
Automated Validation: Develop automated validation mechanisms for critical logs and detection logic, ensuring high accuracy and reliability in threat detection.

Required Skills/Knowledge
Bachelor's degree with 4+ years of Information Security experience, including Splunk ES; in lieu of a degree, 6+ years of experience required.
4 years of Splunk ES Administration: Expertly manage the overall administration of Splunk ES, ensuring optimal performance, scalability, and reliability of the system.
4 years of Splunk Search Processing Language (SPL): Proficiently utilize Splunk SPL for querying, analyzing, and visualizing data to inform timely security decisions.
4 years of Data Models: Build, manage, and effectively leverage Splunk ES data models to enhance data analytics, security insights, and detection logic.
Assets & Identities: Construct and manage comprehensive Splunk ES assets and identities, ensuring accurate security posture and entity correlation.
Alert Actions: Develop, manage, and leverage Splunk ES alert actions to automate and optimize threat detection and response processes.
Programming Expertise: Utilize Python and HTTP client programming to integrate and automate security solutions efficiently.

Desired Skills/Knowledge
Previous experience working with or in SOC and Incident Response programs.
Experience working in organizations that leverage agile methodologies.
Experience working in cloud environments (AWS/Azure).

Eligibility Criteria
Bachelor's degree with 4+ years of Information Security experience, including Splunk ES; in lieu of a degree, 6+ years of experience required.
Work Timings: 3 pm to 12 am IST. This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and requires the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams; the remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs; please discuss with the hiring manager for more details.

For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
Must not be on any corrective action plan (First Formal/Final Formal, LPP).
L08+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Grade/Level: 10
Job Family Group: Information Technology
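The automated-validation duty described in this posting (validating critical logs before they feed detection logic) can be illustrated with a small sketch. Field names and allowed values below are hypothetical; Synchrony's actual pipeline and schemas are not public.

```python
from datetime import datetime

# Hypothetical schema for a security event record.
REQUIRED_FIELDS = {"_time", "src_ip", "dest_ip", "signature"}
ALLOWED_SEVERITIES = {"informational", "low", "medium", "high", "critical"}

def validate_event(event: dict) -> list:
    """Return a list of validation errors for one event (empty list = valid)."""
    errors = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    try:
        # Expect an ISO-8601 timestamp in the _time field.
        datetime.fromisoformat(event.get("_time", ""))
    except ValueError:
        errors.append("unparseable _time")
    if event.get("severity") not in ALLOWED_SEVERITIES:
        errors.append(f"unknown severity: {event.get('severity')}")
    return errors

ok = {"_time": "2024-05-01T12:00:00", "src_ip": "10.0.0.1",
      "dest_ip": "10.0.0.2", "signature": "brute-force", "severity": "high"}
bad = {"_time": "not-a-time", "signature": "scan", "severity": "urgent"}
print(validate_event(ok))   # []
print(validate_event(bad))  # three errors: missing fields, bad _time, bad severity
```

In a real deployment, checks like these would typically run in the CDLC pipeline before detection logic is promoted, so that a schema drift in a log source fails fast rather than silently breaking alerts.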

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorgan Chase within Corporate Data & Analytics Services, aligned to the Corporate Technology Division, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job Responsibilities
Executes standard software solutions, design, development, and technical troubleshooting.
Writes secure and high-quality code using the syntax of at least one programming language with limited guidance.
Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications.
Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.
Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity.
Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development.
Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems.
Adds to team culture of diversity, opportunity, inclusion, and respect.

Required Qualifications, Capabilities, And Skills
Formal training or certification on software engineering concepts and 2+ years applied experience.
Hands-on practical experience in system design, application development, testing, and operational stability.
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
Demonstrable ability to code in one or more languages.
Experience across the whole Software Development Life Cycle.
Exposure to agile methodologies such as CI/CD, Application Resiliency, and Security.
Emerging knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile).

Preferred Qualifications, Capabilities, And Skills
Familiarity with modern front-end technologies.
Exposure to cloud technologies.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
About The Team Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You’ll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

JOB_POSTING-3-73069-5

Job Description
Role Title: AVP, Detection Operations (L10)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work For by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent into leadership roles.

Organizational Overview
This role resides in the Security Automation and Detection Operations organization, a global team responsible for streamlining security events through automation and optimizing detection development. This team partners closely with Technical Intel, JSOC, and other partners to develop and deliver high-fidelity security alerting to protect Synchrony from cyber threats.

Role Summary/Purpose
The AVP, Detection Operations is responsible for managing and optimizing Splunk Enterprise Security (ES) to enhance security operations and threat detection, including managing Splunk Mission Control to improve incident response workflows. Key duties include building and maintaining Splunk ES data models, assets, and identities to improve analytics, entity correlation, and security posture. The role requires developing and managing alert actions to automate and optimize threat detection and response processes. Additionally, this position involves leveraging CDLC pipelines to facilitate detection logic integration. Automated validation of logs and detection logic is also essential to ensure accuracy and reliability in threat detection and response. The role combines technical expertise in Splunk ES, security operations, and automation to enhance system performance, ensure timely response to security incidents, and drive efficient security analytics.

Key Responsibilities
Splunk Mission Control: Develop and manage Splunk Mission Control to enhance incident response capabilities and streamline security operations.
CDLC Pipelines / Detection as Code: Employ CDLC pipelines to expedite and integrate detection logic across systems.
Automated Validation: Develop automated validation mechanisms for critical logs and detection logic, ensuring high accuracy and reliability in threat detection.

Required Skills/Knowledge
Bachelor's degree with 4+ years of Information Security experience, including Splunk ES; in lieu of a degree, 6+ years of experience required.
4 years of Splunk ES Administration: Expertly manage the overall administration of Splunk ES, ensuring optimal performance, scalability, and reliability of the system.
4 years of Splunk Search Processing Language (SPL): Proficiently utilize Splunk SPL for querying, analyzing, and visualizing data to inform timely security decisions.
4 years of Data Models: Build, manage, and effectively leverage Splunk ES data models to enhance data analytics, security insights, and detection logic.
Assets & Identities: Construct and manage comprehensive Splunk ES assets and identities, ensuring accurate security posture and entity correlation.
Alert Actions: Develop, manage, and leverage Splunk ES alert actions to automate and optimize threat detection and response processes.
Programming Expertise: Utilize Python and HTTP client programming to integrate and automate security solutions efficiently.

Desired Skills/Knowledge
Previous experience working with or in SOC and Incident Response programs.
Experience working in organizations that leverage agile methodologies.
Experience working in cloud environments (AWS/Azure).

Eligibility Criteria
Bachelor's degree with 4+ years of Information Security experience, including Splunk ES; in lieu of a degree, 6+ years of experience required.
Work Timings: 3 pm to 12 am IST. This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and requires the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams; the remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs; please discuss with the hiring manager for more details.

For Internal Applicants
Understand the criteria and mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
Must not be on any corrective action plan (First Formal/Final Formal, LPP).
L08+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Grade/Level: 10
Job Family Group: Information Technology

Posted 1 day ago

Apply

10.0 years

0 Lacs

India

Remote

Who We Are
Artmac Soft is a technology consulting and service-oriented IT company dedicated to providing innovative technology solutions and services to customers.

Job Description
Job Title: Senior Power BI Developer
Job Type: Contract
Experience: 10-15 years
Location: Hyderabad (Remote)

Must-Have Skills
Power BI: 10+ years of experience; DAX, Power Query, data modeling, report publishing.
Snowflake: Strong SQL skills for querying and transforming data.
SQL Server: Advanced SQL, stored procedures, views, and integrations.
Sales Reporting: KPIs such as revenue, pipeline, conversion rates, and forecasts.

Key Responsibilities
Design, develop, and deploy interactive Power BI dashboards and reports.
Connect to Snowflake and SQL Server to extract, transform, and load data.
Create and manage data models, DAX measures, and calculated columns.
Work closely with business teams to understand sales reporting needs.
Visualize key sales metrics: targets vs. achievement, pipeline, trends, etc.
Perform data validation, ensure accuracy, and resolve anomalies.
Implement row-level security (RLS) and manage workspace permissions in Power BI Service.
Optimize reports for performance, scalability, and usability.
Document dashboards, logic, and technical specifications.
Collaborate with engineers, analysts, and sales ops teams on continuous improvements.

Qualification
Bachelor's degree or equivalent combination of education and experience.
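The sales KPIs this posting names (revenue, pipeline, conversion rate) reduce to simple aggregations over deal records. A minimal Python sketch with made-up data is below; in the role itself these would live as DAX measures over a Snowflake-backed model, not as Python.

```python
# Hypothetical CRM deal records; "stage" drives each KPI.
deals = [
    {"stage": "won",  "amount": 120_000},
    {"stage": "won",  "amount": 80_000},
    {"stage": "open", "amount": 50_000},
    {"stage": "lost", "amount": 30_000},
]

# Revenue: sum of closed-won amounts.
revenue = sum(d["amount"] for d in deals if d["stage"] == "won")
# Pipeline: sum of still-open amounts.
pipeline = sum(d["amount"] for d in deals if d["stage"] == "open")
# Conversion rate: won deals as a share of all closed (won + lost) deals.
closed = [d for d in deals if d["stage"] in ("won", "lost")]
conversion_rate = sum(d["stage"] == "won" for d in closed) / len(closed)

print(revenue, pipeline, round(conversion_rate, 2))  # 200000 50000 0.67
```

Note the denominator choice: conversion rate here excludes open deals, since counting them would understate the rate until they close. Definitions like this are exactly what the "work closely with business teams" responsibility pins down.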

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Skills & Qualifications
5+ years of experience in test automation.
Strong hands-on experience with manual testing of web applications and APIs.
Proficiency in developing automated tests using tools such as Selenium, Cypress, or similar.
Experience with API testing tools such as Postman, REST Assured, or similar.
Familiarity with React components and DOM structure for effective front-end testing.
Experience testing applications developed in Perl, or the ability to understand legacy Perl systems.
Knowledge of version control systems (e.g., Git).
Understanding of CI/CD workflows (e.g., Jenkins, GitHub Actions).
Strong analytical and troubleshooting skills.
Experience with GitHub and GitHub Copilot.
Excellent communication and documentation skills.

Nice to Have
Familiarity with BDD/TDD frameworks (e.g., Cucumber, RSpec).
Experience with performance testing tools (e.g., JMeter, Gatling).
Exposure to containerized environments (Docker, Kubernetes).
Basic scripting in Perl, or the willingness to learn it.
Understanding of database querying (SQL or NoSQL).

Why Join Us
Be part of a collaborative and inclusive team.
Opportunity to work on modern technologies with a legacy twist.
Career growth and learning opportunities.
Flexible work arrangements and work-life balance.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
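The API-testing skills listed above come down to asserting on the status, shape, and values of a response. A framework-free sketch is below; the payload is canned and its shape hypothetical, standing in for what a Postman or REST Assured test would fetch from a live endpoint.

```python
import json

# Canned response, standing in for an HTTP call in a real automated test.
raw = '{"status": 200, "body": {"id": 7, "name": "widget", "tags": ["a", "b"]}}'
resp = json.loads(raw)

def check_response(resp: dict) -> None:
    """Raise AssertionError if the response violates the expected contract."""
    assert resp["status"] == 200, "unexpected HTTP status"
    body = resp["body"]
    assert isinstance(body["id"], int), "id must be an integer"
    assert set(body) >= {"id", "name", "tags"}, "missing expected keys"

check_response(resp)
print("all checks passed")
```

In practice these assertions would sit inside a pytest test function so that CI (Jenkins, GitHub Actions) reports each contract violation as a distinct failure.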

Posted 1 day ago

Apply

100.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About H.E. Services:
At H.E. Services' vibrant tech center in Hyderabad, you will have the opportunity to contribute to technology innovation for Holman Automotive, a leading American fleet management and automotive services company. Our goal is to continue investing in people, processes, and facilities to ensure expansion in a way that allows us to support our customers and develop new tech solutions. Holman has come a long way during its first 100 years in business. The automotive markets Holman serves include fleet management and leasing; vehicle fabrication and upfitting; component manufacturing and productivity solutions; powertrain distribution and logistics services; commercial and personal insurance and risk management; and retail automotive sales as one of the largest privately owned dealership groups in the United States. Join us and be part of a team that's transforming the way Holman operates, creating a more efficient, data-driven, and customer-centric future.

Roles & Responsibilities:
Design, develop, and maintain data pipelines using Databricks, Spark, and other Azure cloud technologies.
Optimize data pipelines for performance, scalability, and reliability, ensuring high speed and availability of the data warehouse.
Develop and maintain ETL processes using Databricks and Azure Data Factory for real-time or trigger-based data replication.
Ensure data quality and integrity throughout the data lifecycle, implementing new data validation methods and analysis tools.
Collaborate with data scientists, analysts, and stakeholders to understand and meet their data needs.
Troubleshoot and resolve data-related issues, providing root cause analysis and recommendations.
Manage a centralized data warehouse in Azure SQL to create a single source of truth for organizational data, ensuring compliance with data governance and security policies.
Document data pipeline specifications, requirements, and enhancements, effectively communicating with the team and management.
Leverage AI/ML capabilities to create innovative data science products.
Champion and maintain testing suites, code reviews, and CI/CD processes.

Must Have:
Strong knowledge of Databricks architecture and tools.
Proficiency in SQL, Python, and PySpark for querying databases and data processing.
Experience with Azure Data Lake Storage (ADLS), Blob Storage, and Azure SQL.
Deep understanding of distributed computing and Spark for data processing.
Experience with data integration and ETL tools, including Azure Data Factory.
Advanced-level knowledge and practice of: data warehouse and data lake concepts and architectures; optimizing performance of databases and servers; managing infrastructure for storage and compute resources; writing unit tests and scripts; Git, GitHub, and CI/CD practices.

Good to Have:
Experience with big data technologies such as Kafka, Hadoop, and Hive.
Familiarity with Azure Databricks Medallion Architecture with DLT and Iceberg.
Experience with semantic layers and reporting tools such as Power BI.

Relevant Work Experience:
5+ years of experience as a Data Engineer, ETL Developer, or similar role, with a focus on Databricks and Spark.
Experience working on internal, business-facing teams.
Familiarity with agile development environments.

Education and Training:
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
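The "ensure data quality and integrity throughout the data lifecycle" responsibility above usually means row-level checks (nulls, duplicate keys) at each pipeline stage. A pure-Python sketch with hypothetical fleet records is below; in the actual role this logic would run as a PySpark step inside Databricks rather than plain Python.

```python
# Hypothetical fleet telemetry rows landing from an ingest step.
rows = [
    {"vehicle_id": "V1", "odometer": 12000},
    {"vehicle_id": "V2", "odometer": None},   # null value
    {"vehicle_id": "V1", "odometer": 12500},  # duplicate key
]

def quality_report(rows, key="vehicle_id"):
    """Count rows containing nulls and rows whose key was already seen."""
    seen, nulls, dupes = set(), 0, 0
    for r in rows:
        if any(v is None for v in r.values()):
            nulls += 1
        if r[key] in seen:
            dupes += 1
        seen.add(r[key])
    return {"rows": len(rows), "null_rows": nulls, "duplicate_keys": dupes}

print(quality_report(rows))  # {'rows': 3, 'null_rows': 1, 'duplicate_keys': 1}
```

A pipeline would typically gate promotion to the warehouse on such a report, failing the run (or quarantining rows) when counts exceed a threshold.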

Posted 1 day ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
We are looking for a meticulous and experienced BI Reporting QA professional to lead the team and ensure the accuracy and reliability of our Business Intelligence (BI) reports and data visualizations. The BI Reporting QA plays a critical role in validating and verifying the quality of reports and dashboards, ensuring they provide dependable insights for decision-making.

Responsibilities:
Design and develop a QA strategy for BI by identifying all the relevant tests and validations.
Develop and execute comprehensive test plans and test cases for BI reports, dashboards, and data visualizations to identify defects and discrepancies.
Verify data accuracy by comparing data sources to report outputs, identifying inconsistencies, anomalies, and data quality issues.
Evaluate the performance and responsiveness of reports and dashboards, ensuring they load efficiently and meet performance expectations.
Perform regression testing to confirm that report modifications or updates do not introduce new issues or affect existing functionality.
Collaborate with end-users and stakeholders to conduct UAT and ensure that reports meet business requirements and user expectations.
Document and track defects and issues, working closely with BI developers to ensure timely resolution.
Create and maintain documentation of test cases, results, and validation procedures for reference and reporting.
Ensure that BI reports adhere to data governance principles, including data accuracy, data quality, and data security.
Manage and maintain test environments, including data sets and configurations, to support effective testing.

Required Skills:
Proven experience in Power BI reporting quality assurance.
Proficiency in designing and writing SQL statements for data querying and QA validation.
Travel industry experience is essential.
Strong understanding of BI reporting tools and platforms.
Proficiency in data validation, data comparison, and data quality assessment.
Expertise in implementing automation in QA processes.

Preferences:
Relevant BI reporting tool certifications (Microsoft Certified: Power BI).
Relevant quality assurance certifications (e.g., ISTQB Certified Tester).

Qualifications
Graduate

Additional Information
100% work from office (24x5).
No mobile phones or storage devices allowed on the floor.
Rotational shifts.

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information.

Veradigm
Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry-proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point of patient care to everyday life. For more information, please explore www.veradigm.com.

Job Summary
What will your job look like: We are seeking a detail-oriented and experienced Database and Backend Test Engineer with 5+ years of experience in testing large-scale data platforms, including Snowflake, Azure Data Services, and backend services. The ideal candidate will be responsible for validating data pipelines, backend logic, stored procedures, and integrations, ensuring the accuracy, performance, and quality of enterprise data systems.

Key Responsibilities
Design and implement test strategies for backend systems and data pipelines across Snowflake and Azure environments.
Write and execute complex SQL queries to validate transformations, stored procedures, and data quality.
Perform ETL testing, data reconciliation, schema validation, and metadata checks.
Collaborate with data engineers and developers to verify pipeline performance, reliability, and scalability.
Build and maintain automated test scripts using tools like pytest, dbt, or custom SQL-based frameworks.
Integrate database tests into CI/CD pipelines using tools such as Azure DevOps, GitHub Actions, or Jenkins.
Perform root cause analysis on data issues and communicate findings with relevant teams.
Monitor and validate data processing jobs and schedule validations using Azure Data Factory, Synapse, or Databricks.
Document test scenarios, data sets, and validation logs in a structured manner.

An Ideal Candidate Will Have
Required Skills & Qualifications:
5+ years of experience in database and backend testing.
Strong hands-on experience with Snowflake, including data modeling, querying, and security roles.
Experience with Azure data tools such as Azure SQL, Data Factory, Synapse Analytics, or Data Lake.
Advanced proficiency in SQL and performance tuning.
Experience with ETL/ELT testing and validation of data migration or transformation logic.
Familiarity with Python or shell scripting for data test automation.
Knowledge of CI/CD integration for test automation.
Strong understanding of data quality frameworks, data governance, and test reporting.

Preferred Qualifications
Experience with dbt, Great Expectations, or other data validation tools.
Exposure to cloud storage validation (Azure Blob, ADLS).
Experience in testing APIs for data services or backend integrations.
Knowledge of data privacy and compliance frameworks (e.g., GDPR, HIPAA).

Benefits
Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish.
Quarterly Company-Wide Recharge Days Flexible Work Environment (Remote/Hybrid Options) Peer-based incentive “Cheer” awards “All in to Win” bonus Program Tuition Reimbursement Program To know more about the benefits and culture at Veradigm, please visit the links mentioned below: - https://veradigm.com/about-veradigm/careers/benefits/ https://veradigm.com/about-veradigm/careers/culture/ We are an Equal Opportunity Employer. No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!
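The schema-validation and metadata-check duties in the Veradigm posting above can be sketched compactly: compare a table's actual columns and types against an expected contract. The sketch uses sqlite3 and an invented claims table; against Snowflake or Azure SQL the same idea would query INFORMATION_SCHEMA instead of PRAGMA.

```python
import sqlite3

# Expected schema contract for a (hypothetical) claims table.
EXPECTED = {"patient_id": "INTEGER", "visit_date": "TEXT", "charge": "REAL"}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (patient_id INTEGER, visit_date TEXT, charge REAL)")

# PRAGMA table_info yields (cid, name, type, notnull, default, pk) per column.
actual = {row[1]: row[2] for row in con.execute("PRAGMA table_info(claims)")}

missing = EXPECTED.keys() - actual.keys()
wrong_type = {c for c in EXPECTED.keys() & actual.keys() if EXPECTED[c] != actual[c]}

assert not missing and not wrong_type, f"schema drift: {missing or wrong_type}"
print("schema OK")
```

Wrapped in a pytest test and run from CI (Azure DevOps, GitHub Actions), a check like this catches upstream schema drift before it corrupts downstream transformations.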

Posted 1 day ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

About Cisive Cisive is a trusted partner for comprehensive, high-risk compliance-driven background screening and workforce monitoring solutions, specializing in highly regulated industries—such as healthcare, financial services, and transportation. We catch what others miss, and we are dedicated to helping our clients effortlessly secure the right talent. As a global leader, Cisive empowers organizations to hire with confidence. Through our PreCheck division, Cisive provides specialized background screening and credentialing solutions tailored for healthcare organizations, ensuring patient and workforce safety. Driver iQ, our transportation-focused division, delivers FMCSA-compliant screening and monitoring solutions that help carriers hire and retain the safest drivers on the road. Unlike traditional background screening providers, Cisive takes a technology-first approach powered by advanced automation, human expertise, and compliance intelligence—all delivered through a scalable platform. Our solutions include continuous workforce monitoring, identity verification, criminal record screening, license monitoring, drug & health screening, and global background checks. Job Summary The Senior Software Developer is responsible for designing and delivering complex, scalable software systems, leading technical initiatives, and mentoring junior developers. This role plays a key part in driving high-impact projects and ensuring the delivery of robust, maintainable solutions. In addition to core development duties, the role works closely with the business to identify opportunities for automation and web scraping to improve operational efficiency. The Senior Software Developer will collaborate with Cisive’s Software Development team and client stakeholders to support, analyze, mine, and report on IT and business data—focusing on optimizing data handling for web scraping processes. 
This individual will manage and consult on data flowing into and out of Cisive systems, ensuring data integrity, performance, and compliance with operational standards. The role is critical to achieving service excellence and automation across Cisive’s diverse product offerings and will continuously strive to enhance process efficiency and data flow across platforms. Duties And Responsibilities Lead the design, architecture, and implementation of scalable and maintainable web scraping solutions using the Scrapy framework, integrated with tools such as Kafka, Zookeeper, and Redis Develop and maintain web crawlers to automate data extraction from various sources, ensuring alignment with user and application requirements Research, design, and implement automation strategies across multiple platforms, tools, and technologies to optimize business processes Monitor, troubleshoot, and resolve issues affecting the performance, reliability, and stability of scraping systems and automation tools Serve as a Subject Matter Expert (SME) for automation systems, providing guidance and support to internal teams Analyze and validate extracted data to ensure accuracy, integrity, and compliance with Cisive’s data standards Define, implement, and enforce data requirements, standards, and best practices to ensure consistent and efficient operations Collaborate with stakeholders and end users to define technical requirements, business goals, and alternative solutions for data collection and reporting Create, manage, and document reports, processes, policies, and project plans, including risk assessments and goal tracking Conduct code reviews, enforce coding standards, and provide technical leadership and mentorship to development team members Proactively identify and mitigate technical risks, recommending improvements in technologies, tools, and processes Drive the adoption of modern development tools, frameworks, and best practices Contribute to strategic planning related to automation 
initiatives and product development Ensure clear, thorough communication and documentation across teams to support knowledge sharing and training Minimum Qualifications Bachelor’s degree in Computer Science, Software Engineering, or related field. 5+ years of professional software development experience. Strong proficiency in HTML, XML, XPath, XSLT, and Regular Expressions for data extraction and transformation Hands-on experience with Visual Studio Strong proficiency in Python Some experience with C# .NET Solid experience with MS SQL Server, with strong skills in SQL querying and data analysis Experience with web scraping, particularly using the Scrapy framework integrated with Kafka, Zookeeper, and Redis Experience with .NET automation tools such as Selenium Understanding of CAPTCHA-solving services and working with proxy services Experience working in a Linux environment is a plus Highly self-motivated and detail-oriented, with a proactive, goal-driven mindset Strong team player with dependable work habits and well-developed interpersonal skills Excellent verbal and written communication skills Demonstrates willingness and flexibility to adapt schedule when necessary to meet client needs.
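The extraction skills this posting lists (XPath, regular expressions, data validation) can be illustrated with a minimal sketch; a production Scrapy crawler would wrap the same parsing logic in spiders and pipelines. The feed, field names, and price regex below are hypothetical, and only Python's standard library is used:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical product feed standing in for a fetched page (illustrative only).
FEED = """
<catalog>
  <item><name>Widget A</name><price> USD 19.99 </price></item>
  <item><name>Widget B</name><price>USD 5.00</price></item>
</catalog>
"""

PRICE_RE = re.compile(r"(\d+\.\d{2})")  # regex cleanup: pull the numeric part


def extract_items(xml_text):
    """Extract name/price pairs using ElementTree's limited XPath support."""
    root = ET.fromstring(xml_text)
    rows = []
    for item in root.findall(".//item"):  # XPath: every <item> node
        name = item.findtext("name", default="").strip()
        match = PRICE_RE.search(item.findtext("price", default=""))
        rows.append({"name": name, "price": float(match.group(1)) if match else None})
    return rows


if __name__ == "__main__":
    for row in extract_items(FEED):
        print(row)
```

In a real Scrapy deployment the same selectors would run inside a spider's `parse` callback, with Redis/Kafka handling request deduplication and downstream delivery.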

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Profile - .NET NopCommerce Developer Experience : 3+ Years Location : GIFT City, Gandhinagar (Work From Office) Requirements : Required Technical Skills Strong proficiency in C# and ASP.NET Core (or ASP.NET MVC). Solid experience with the NopCommerce platform, including its architecture, customization capabilities, and plugin development. Proficiency in SQL Server for database design, querying, and optimization. Good understanding of front-end technologies: HTML5, CSS3, JavaScript, jQuery. Experience with Entity Framework or other ORMs. Familiarity with web services and API integration (RESTful APIs). Experience with version control systems, preferably Git. Preferred Skills (Nice to Have) Knowledge of front-end frameworks like React, Angular, or Vue.js. Experience with cloud platforms like Azure or AWS. Familiarity with Agile/Scrum development methodologies. Experience with performance optimization techniques for e-commerce websites. Desirable Qualities / Soft Skills Excellent problem-solving and analytical skills. Strong attention to detail and ability to write clean, maintainable code. Good communication and interpersonal skills. Ability to work independently and as part of a collaborative team. Proactive Problem Solver: Ability to identify issues, propose solutions, and implement them effectively. Continuous Learner: Eagerness to learn new technologies, adapt to evolving industry trends, and keep skills sharp. Self-motivated with a strong desire to learn and grow. Responsibilities Design, develop, and maintain scalable and high-performance e-commerce solutions using the NopCommerce platform. Customize and extend NopCommerce functionalities, including plugin development, theme integration, and core modifications, to meet specific client requirements. Implement best practices for coding, testing, and deployment to ensure the reliability, security, and maintainability of the codebase. 
Collaborate with cross-functional teams (e.g., project managers, designers, QA testers) to translate business requirements into technical solutions. Troubleshoot, debug, and resolve technical issues related to NopCommerce implementations. Participate in code reviews and contribute to the improvement of development processes and standards. Participate in daily stand-ups and agile ceremonies, contributing actively to sprint planning and reviews. Assist in the deployment and maintenance of NopCommerce applications on various environments (e.g., development, staging, production). Create and maintain technical documentation for developed modules and features. Stay updated with the latest trends and technologies in e-commerce and NopCommerce development.

Posted 1 day ago

Apply

0 years

0 Lacs

Jamshedpur, Jharkhand, India

On-site

To be a successful MES (Manufacturing Execution System) developer, you need a blend of technical skills, including proficiency in programming languages like Java, C#, Python, and SQL, along with knowledge of MES software, ERP systems, and industrial automation systems like PLCs and SCADA. Here's a more detailed breakdown of the key skill sets: Technical Skills: Programming Languages: Java, C#, Python, SQL: Proficiency in these languages is crucial for developing and customizing MES applications. Other Languages: Depending on the specific MES software, familiarity with other languages like XML, VBScript, or .NET might be required. MES Software & Systems: MES Software: Experience with specific MES platforms (e.g., AVEVA, GE, Emerson, Rockwell FactoryTalk Production Suite, SAP, etc.) is highly valued. ERP Systems: Understanding how MES integrates with ERP systems (e.g., SAP, Oracle) is essential. SCADA & PLCs: Knowledge of industrial automation systems, including SCADA (Supervisory Control and Data Acquisition) and PLCs (Programmable Logic Controllers), is crucial for understanding the manufacturing environment. Databases: SQL Databases: Strong SQL skills are needed for data management, querying, and reporting within the MES system. Other Databases: Familiarity with other database technologies might be beneficial depending on the MES platform. Scripting & Reporting: Scripting: Ability to write scripts for automating tasks and customizing the MES system. Report Generation: Experience with generating reports and dashboards to monitor manufacturing performance. API Integration & System Connectivity: APIs: Knowledge of APIs and their use for integrating MES with other systems. System Connectivity: Understanding how to connect MES with various devices and systems (e.g., sensors, PLCs, ERP). Troubleshooting & Debugging: Troubleshooting: Ability to diagnose and resolve issues within the MES system. Debugging: Experience with debugging MES software and applications. Industry Knowledge:
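The SQL reporting skills called out above can be sketched with a small, self-contained example; the tables and figures are hypothetical MES-style data, not any specific platform's schema:

```python
import sqlite3

# Hypothetical MES-style tables; names and figures are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE work_orders (id INTEGER PRIMARY KEY, product TEXT, qty_planned INTEGER);
CREATE TABLE confirmations (order_id INTEGER, qty_good INTEGER, qty_scrap INTEGER);
INSERT INTO work_orders VALUES (1, 'PUMP-100', 50), (2, 'VALVE-200', 80);
INSERT INTO confirmations VALUES (1, 45, 3), (2, 80, 0);
""")

# Yield report: good quantity versus plan, per work order.
rows = conn.execute("""
    SELECT w.product,
           c.qty_good,
           ROUND(100.0 * c.qty_good / w.qty_planned, 1) AS yield_pct
    FROM work_orders w
    JOIN confirmations c ON c.order_id = w.id
    ORDER BY w.id
""").fetchall()

for product, qty_good, yield_pct in rows:
    print(product, qty_good, yield_pct)
```

The same join-and-aggregate pattern underpins most MES dashboards, whatever the vendor database happens to be.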

Posted 1 day ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Equity Index Data Management is part of the Benchmarks & Indices Operations. It is a customer-focused, technology-enabled team producing high-value, outstanding analytics & data derivations that drive insight and value for our internal and external customers. Reference Data & Analytics is responsible for creating LSEG Intellectual Property by generating ICB Classifications, Free Float and similar data fields based on data standardization from raw data. For our team based in Bengaluru, India, we are currently recruiting an Analyst in our Equity Index Data Management Team. The Analyst will handle a wide range of data & information on a universe of companies and will be responsible for ensuring quality and accuracy in all our products. Key Responsibilities Of The Role Become a subject matter expert to Analyze, Extract, Standardize & Maintain quantitative and qualitative data from various sources to a high degree of quality following defined policies and business rules Help support organizational change projects by creating and handling applicable processes Support managers and leaders through the iterative development of Key Performance Indicators, metrics and other management reporting, as needed Strong technology literacy, including advanced knowledge of the MS Office suite of products, strong functional understanding of database querying language (SQL) and familiarity with Python, or similar. Design & Implement key proactive data quality controls to minimize risk by investigating and resolving data exceptions and breaks Perform quality control and provide specific feedback to improve quality over time. 
Report on results to managers, including root cause and actions plans for improvement Develop strong domain expertise on how data is used by collaborators and clients and ensure that internal processes align to deliver customer satisfaction Manage a team in activities to reduce effort spent on data management and analysis by using automation or by process optimization, often by use of 6Sigma or similar industry standard tools Own key processes end to end, and develop and maintain methodology guidance, & quality rules and documentation Work effectively with peers, research analysts, and manager - both within the team as well as other teams in multiple time zones. Helps and works with staff at various levels and may help to support on/off boarding processes, as necessary Supports the current data policy framework including the charter, group data policy, standards, procedures and playbooks, and acts as an implementation expert to business partners and corporate functions Adapt to changes to technology, process, systems as we transform our business Demonstrates a significant degree of ingenuity, creativity and resourcefulness Key behaviors and skills required to be successful in the role: Bachelor’s Degree or equivalent experience Expertise in SQL, Python, PowerBI Excellent English Language Skills (write, read, talk) Excellent problem-solving skills Ability to work independently with high degree of ownership Ability to meet strict weekly timeliness and quality standards established by the department Ability to multi-task and be results driven Six Sigma or other process improvement certifications a plus Key Accountabilities : Performs data investigation on diagnostic, descriptive, and statistical output during the course of research projects Participates in the development of new business proposals and assists on other on-going project work Oversees and reviews the work of Analysts, helping to train and develop them Maximises specific market / language skills to 
deliver on customer / business requirements, adhering to the quality metrics set Leads in the resolution of client cases by fixing the data integrity, missing data and other similar issues Processes data in an accurate and timely manner using the knowledge of the tools and financial markets or compliance industry Uses the approved sources to identify the data, convert / make valuable contributions to the data or use the data as is defined in the collection policy Collaborates with internal teams and external teams to support sourcing and collection of data Supports specific complex projects assigned and meets or exceeds the critical metrics defined. Supports the implementation of process improvement ideas to improve efficiency and customer experience. LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership , Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. 
LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
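The proactive data quality controls this role describes might look, in spirit, like the following sketch; the field names, rules, and records are illustrative assumptions, not LSEG's actual schema:

```python
# Hypothetical records; field names are illustrative, not an actual LSEG schema.
records = [
    {"company": "Alpha Plc", "icb_code": "301010", "free_float": 87.5},
    {"company": "Beta AG",   "icb_code": "30101",  "free_float": 87.5},
    {"company": "Gamma SA",  "icb_code": "451020", "free_float": 104.0},
]


def check(rec):
    """Return the list of rule violations for one record (assumed rules)."""
    issues = []
    if not (rec["icb_code"].isdigit() and len(rec["icb_code"]) == 6):
        issues.append("icb_code must be a 6-digit code")
    if not 0.0 <= rec["free_float"] <= 100.0:
        issues.append("free_float must be between 0 and 100")
    return issues


# Exception report: only records that break at least one rule.
exceptions = {r["company"]: check(r) for r in records if check(r)}
for company, issues in exceptions.items():
    print(company, "->", "; ".join(issues))
```

In practice such rules would live in a shared rules catalogue and feed the root-cause reporting the role mentions, rather than being hard-coded.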

Posted 1 day ago

Apply

0 years

25 - 32 Lacs

Gurgaon, Haryana, India

On-site

Company: Sun King Website: Visit Website Business Type: Enterprise Company Type: Product & Service Business Model: Others Funding Stage: Series D+ Industry: Renewable Energy Salary Range: ₹ 25-32 Lacs PA Job Description About the role: Sun King is looking for a self-driven Infrastructure engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IAC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments. What You Would Be Expected To Do Work with engineering, automation, and data teams to work on various infrastructure requirements. Designing modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform. Managing AWS services for multiple teams. Managing custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services. Deployment and management of Kubernetes resources. Deployment and management of custom metrics exporters, trace data, custom application metrics, and designing dashboards, querying metrics from multiple resources, as an end-to-end observability stack solution. Set up incident response services and design effective processes. Deployment and management of critical platform services like OPA and Keycloak for IAM. Advocate best practices for high availability and scalability when designing AWS infrastructure, observability dashboards, implementing IAC, deploying to Kubernetes, and designing GitOps CI/CD pipelines. You Might Be a Strong Candidate If You Have/are Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks. Experience working with web servers (nginx, apache) and cloud providers (preferably AWS). 
Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments. Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters). Knowledge of web architecture, distributed systems, and single points of failure. Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottleneck. Good networking fundamentals - SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls. Good To Have Experience with backend development and setting up databases and performance tuning using parameter groups. Working experience in Kubernetes cluster administration and Kubernetes deployments. Experience working alongside sec ops engineers. Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing. Setup and usage of open telemetry, central logging, and monitoring systems.
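One piece of the observability work described above, tracking availability against a target, can be sketched as a small calculation; the SLO value and request counts are hypothetical:

```python
def error_budget(total_requests, failed_requests, slo=0.999):
    """Availability against an SLO target; remaining budget is the share of
    allowed failures not yet consumed. All figures here are hypothetical."""
    availability = 1 - failed_requests / total_requests
    allowed_failures = total_requests * (1 - slo)
    remaining = 1 - failed_requests / allowed_failures
    return availability, remaining


# 400 failed requests out of 1M against a 99.9% availability target.
availability, remaining = error_budget(1_000_000, 400)
print(f"availability={availability:.4%} budget_remaining={remaining:.0%}")
```

In a real stack the request counts would come from a Prometheus query rather than literals, and the remaining budget would drive alerting and release decisions.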

Posted 1 day ago

Apply

0 years

25 - 30 Lacs

Gurgaon, Haryana, India

On-site

Company: Sun King Website: Visit Website Business Type: Enterprise Company Type: Product & Service Business Model: Others Funding Stage: Series D+ Industry: Renewable Energy Salary Range: ₹ 25-30 Lacs PA Job Description About the role: Sun King is looking for a self-driven Infrastructure engineer, who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IAC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments. What You Would Be Expected To Do Work with engineering, automation, and data teams to work on various infrastructure requirements. Designing modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform. Managing AWS services for multiple teams. Managing custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services. Deployment and management of Kubernetes resources. Deployment and management of custom metrics exporters, trace data, custom application metrics, and designing dashboards, querying metrics from multiple resources, as an end-to-end observability stack solution. Set up incident response services and design effective processes. Deployment and management of critical platform services like OPA and Keycloak for IAM. Advocate best practices for high availability and scalability when designing AWS infrastructure, observability dashboards, implementing IAC, deploying to Kubernetes, and designing GitOps CI/CD pipelines. You Might Be a Strong Candidate If You Have/are Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks. Experience working with web servers (nginx, apache) and cloud providers (preferably AWS). 
Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments. Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters). Knowledge of web architecture, distributed systems, and single points of failure. Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottleneck. Good networking fundamentals - SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls. Good To Have Experience with backend development and setting up databases and performance tuning using parameter groups. Working experience in Kubernetes cluster administration and Kubernetes deployments. Experience working alongside sec ops engineers. Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing. Setup and usage of open telemetry, central logging, and monitoring systems.

Posted 1 day ago

Apply

20.0 years

0 Lacs

India

On-site

Description Over the past 20 years, Amazon has reinvented on behalf of customers and has become the largest internet retailer and marketplace in the world. NOC (Network Operation Center) is the central command and control center for ‘Transportation Execution’ across Amazon's transportation network. It ensures hassle-free, timely pick-up and delivery of freight from vendors to Amazon fulfillment centers (FC) and from Amazon FCs to carrier hubs. In case of any exceptions, NOC steps in to resolve the issue and keeps all the stakeholders informed on the proceedings. Along with this tactical problem solving, NOC is also charged with understanding trends in network exceptions and then automating processes or proposing process changes to streamline operations. This second aspect involves network monitoring and significant analysis of network data. Overall, NOC plays a critical role in ensuring the smooth functioning of Amazon transportation and thereby has a direct impact on Amazon’s ability to serve its customers on time. Within NOC’s umbrella resides a specific arm which manages Inbound scheduling, MFI (Missing from inbound), 3P pickups, Vendor returns and invoice scanning operations across India (IN), AMET (South Africa, UAE, KSA, EGY and Turkey), Australia (AU), Japan (JP), Singapore (SG), Brazil (BR) and Mexico (MX). Purview of a Transportation Specialist A Transportation Specialist inbound at NOC facilitates the flow of information between different stakeholders (Vendors/Sellers/Inbound Supply chain/category managers/Fulfillment centers) and resolves any potential issues that impact vendor/seller experience and business continuity. 
A Transportation Specialist at NOC works on Inbound operations, which deal with appointment scheduling at Fulfillment centers requested by Vendors/sellers/carriers, ensuring that the truck reaches the FC for shipment delivery from vendors/sellers as per schedule. The Transportation Specialist on Inbound addresses any potential issues occurring during the lifecycle of freight placement and freight unloading at FCs. A Transportation Specialist provides timely resolution to issues by researching and querying internal tools and by taking real-time decisions. An ideal candidate should be able to understand requirements, analyze data to notice trends, and drive vendor/seller experience without compromising on time. The candidate should have a basic understanding of Logistics and should be able to communicate clearly in written and oral form. A Transportation Specialist should be able to ideate process improvements and should have the zeal to drive them to conclusion. Key job responsibilities include, but are not limited to: Communication with external customers (Carriers, Vendors/Suppliers) and internal customers (Business, Planning, Fulfillment Centers etc) for freight scheduling/delays in arrivals/delays in unloading at FC or any other disruptions in the transportation network. Ability to pull data from Amazon tools to perform reporting and analysis thereby providing visibility to the leaders and stakeholders Develop and/or understand performance metrics (ex: capacity utilization at Amazon FCs) to assist with driving business results. Must be able to quickly understand the business impact of the trends and make decisions that make sense based on available data. Must be able to systematically escalate problems or variance in the information and data to the relevant owners and teams and follow through on the resolutions to ensure they are delivered. 
Might be required to work a flexible schedule/shift/work area, including weekends, nights, and/or holidays as per business needs. Providing real-time vendor/seller experience by working in a fast-paced operating environment. Basic Qualifications Bachelor's degree in a quantitative/technical field such as computer science, engineering, statistics Experience with Excel Experience with SQL Preferred Qualifications - Bachelor's degree in a quantitative/technical field such as computer science, engineering, statistics - Experience with Excel - Experience with SQL Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ASSPL - Karnataka Job ID: A3031447
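The capacity-utilization metric this posting gives as an example could be computed along these lines; the tables, FC codes, and numbers are invented for illustration and do not reflect Amazon's internal tools:

```python
import sqlite3

# Invented tables and FC codes for illustration; not Amazon's internal schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fc_capacity (fc TEXT PRIMARY KEY, slots_total INTEGER);
CREATE TABLE appointments (fc TEXT, slots_booked INTEGER);
INSERT INTO fc_capacity VALUES ('DEL4', 40), ('BLR7', 60);
INSERT INTO appointments VALUES ('DEL4', 30), ('BLR7', 42);
""")

# Capacity utilization per fulfillment center, highest first.
rows = conn.execute("""
    SELECT c.fc,
           ROUND(100.0 * a.slots_booked / c.slots_total, 1) AS util_pct
    FROM fc_capacity c
    JOIN appointments a ON a.fc = c.fc
    ORDER BY util_pct DESC
""").fetchall()

for fc, util_pct in rows:
    print(fc, util_pct)
```

The SQL-plus-Excel reporting loop the qualifications describe is essentially this query, exported and trended over time.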

Posted 1 day ago

Apply

1.0 years

1 - 3 Lacs

India

On-site

We are KUSHUB MEDIA SOLUTIONS, a team of web design and development professionals who love partnering with good people and businesses to help them achieve online success. Job Summary: Kushub Media is seeking a passionate and skilled Backend Developer to join our growing technology team. You will play a crucial role in designing, developing, and maintaining robust and scalable server-side applications and APIs. Leveraging your experience with Node.js and MongoDB, you will contribute to building and enhancing our core products and services. This is an excellent opportunity for a motivated individual to make a significant impact in a dynamic and collaborative environment. Responsibilities: Design, develop, and maintain efficient and reliable backend services and APIs using Node.js. Work extensively with MongoDB to design, implement, and optimize data models and queries. Collaborate closely with frontend developers to integrate backend logic with user-facing elements. Participate in the entire software development lifecycle, including requirements gathering, design, coding, testing, and deployment. Write clean, well-documented, and testable code. Troubleshoot and debug backend issues and provide timely resolutions. Ensure the performance, scalability, and security of our backend systems. Stay up-to-date with the latest industry trends and technologies in backend development. Contribute to code reviews and share best practices within the team. Participate in architectural discussions and contribute to technical decision-making. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience). Minimum of 1 year of professional experience in backend development. Solid understanding of JavaScript and the Node.js ecosystem. Proven experience working with MongoDB, including schema design, querying, and indexing. Experience in building RESTful APIs and microservices. 
Familiarity with version control systems, preferably Git. Understanding of software development principles and best practices. Strong problem-solving and analytical skills. Excellent communication and collaboration skills. To Apply: submit a resume and cover letter to ankit@kushubmedia.com or apply through our website at www.kushubmedia.com Job Types: Full-time, Permanent Pay: ₹15,000.00 - ₹30,000.00 per month Work Location: In person Application Deadline: 25/04/2025

Posted 1 day ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody’s Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com Position Title- Associate Director (Senior Architect – Data) Department-IT Location- Gurgaon/ Bangalore Job Summary The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at conceptual, logical, business area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining enterprise data architecture, ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects. 
Key Responsibilities Strategy & Planning Develop and deliver long-term strategic goals for data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders. Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity. Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement. Conduct data capacity planning, life cycle, duration, usage requirements, feasibility studies, and other tasks. Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving. Ensure that data strategies and architectures are aligned with regulatory compliance. Develop a comprehensive data strategy in collaboration with different stakeholders that aligns with the transformational projects’ goals. Ensure effective data management throughout the project lifecycle. Acquisition & Deployment Ensure the success of enterprise-level application rollouts (e.g. ERP, CRM, HCM, FP&A, etc.) Liaise with vendors and service providers to select the products or services that best meet company goals Operational Management o Assess and determine governance, stewardship, and frameworks for managing data across the organization. o Develop and promote data management methodologies and standards. 
o Document information products from business processes and create data entities o Create entity relationship diagrams to show the digital thread across the value streams and enterprise o Create data normalization across all systems and data base to ensure there is common definition of data entities across the enterprise o Document enterprise reporting needs develop the data strategy to enable single source of truth for all reporting data o Address the regulatory compliance requirements of each country and ensure our data is secure and compliant o Select and implement the appropriate tools, software, applications, and systems to support data technology goals. o Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality. o Collaborate with project managers and business unit leaders for all projects involving enterprise data. o Address data-related problems regarding systems integration, compatibility, and multiple-platform integration. o Act as a leader and advocate of data management, including coaching, training, and career development to staff. o Develop and implement key components as needed to create testing criteria to guarantee the fidelity and performance of data architecture. o Document the data architecture and environment to maintain a current and accurate view of the larger data picture. o Identify and develop opportunities for data reuse, migration, or retirement. Data Architecture Design: Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes. Design and implement scalable, high-performance data solutions that meet business requirements. Data Governance: Establish and enforce data governance policies and procedures as agreed with stakeholders. Maintain data integrity, quality, and security within Finance, HR and other such enterprise systems. 
Data Migration: Oversee the data migration process from legacy systems to the new systems being put in place. Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness.

Master Data Management: Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes. Provide data management (create, update and delimit) methods to ensure master data is governed.

Stakeholder Collaboration: Collaborate with various stakeholders, including business users and other system vendors, to understand data requirements. Ensure the enterprise system meets the organization's data needs.

Training and Support: Provide training and support to end-users on data entry, retrieval, and reporting within the candidate enterprise systems. Promote user adoption and proper use of data.

Data Quality Assurance: Implement data quality assurance measures to identify and correct data issues. Ensure that Oracle Fusion and other enterprise systems contain reliable and up-to-date information.

Reporting and Analytics: Facilitate the development of reporting and analytics capabilities within Oracle Fusion and other systems. Enable data-driven decision-making through robust data analysis.

Continuous Improvement: Continuously monitor and improve data processes and the data capabilities of Oracle Fusion and other systems. Leverage new technologies for enhanced data management to support evolving business needs.

Technology and Tools
- Oracle Fusion Cloud
- Data modeling tools (e.g., ER/Studio, ERwin)
- ETL tools (e.g., Informatica, Talend, Azure Data Factory)
- Data pipelines: understanding of data pipeline tools like Apache Airflow and AWS Glue.
- Database management systems (e.g., Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached)
- Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM)
- Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP)
- Hyperscalers / cloud platforms (e.g., AWS, Azure)
- Big data technologies such as Hadoop, HDFS, MapReduce, and Spark
- Cloud platform services such as Amazon Web Services (RDS, Redshift, S3), Microsoft Azure (Azure SQL Database, Cosmos DB), and Google Cloud Platform (BigQuery, Cloud Storage)
- Programming languages and frameworks (e.g., Java, J2EE, EJB, .NET, WebSphere)
- SQL: strong SQL skills for querying and managing databases
- Python: proficiency in Python for data manipulation and analysis
- Java: knowledge of Java for building data-driven applications
- Data security and protocols: understanding of data security protocols and compliance standards

Key Competencies and Qualifications

Education: Bachelor's degree in computer science, information technology, or a related field. Master's degree preferred.

Experience: 10+ years overall and at least 7 years of experience in data architecture, data modeling, and database design. Proven experience with data warehousing, data lakes, and big data technologies. Expertise in SQL and experience with NoSQL databases. Experience with cloud platforms (e.g., AWS, Azure) and related data services. Experience with Oracle Fusion or similar ERP systems is highly desirable.

Skills: Strong understanding of data governance and data security best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work effectively in a collaborative team environment. Leadership experience with a track record of mentoring and developing team members.
Excellent documentation and presentation skills. Good knowledge of applicable data privacy practices and laws.

Certifications: Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus.

Behavioral
- A self-starter, an excellent planner and executor and, above all, a good team player
- Excellent communication and interpersonal skills are a must
- Must possess organizational skills, including multi-tasking, priority setting, and meeting deadlines
- Ability to build collaborative relationships and effectively leverage networks to mobilize resources
- Initiative to learn the business domain is highly desirable
- Enjoys a dynamic environment with constantly evolving requirements
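The master data management and normalization duties this posting describes can be illustrated with a small, hedged sketch. All table and column names below are invented for the example; it simply shows the pattern of consolidating records from two source systems into one normalized "golden record" table with SQL.

```python
import sqlite3

# Hypothetical example: vendor master data pulled from two source systems
# ("ERP" and "CRM") with inconsistent formatting, consolidated into one
# normalized master table. Schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE raw_vendors (source TEXT, vendor_name TEXT, country TEXT)")
cur.executemany(
    "INSERT INTO raw_vendors VALUES (?, ?, ?)",
    [
        ("ERP", " Acme Corp ", "US"),
        ("CRM", "ACME CORP",   "US"),
        ("ERP", "Globex",      "IN"),
    ],
)

# Normalize names (trim + upper-case) and collapse duplicates across systems,
# keeping a count of how many source systems contributed each golden record.
cur.execute("""
    CREATE TABLE vendor_master AS
    SELECT UPPER(TRIM(vendor_name)) AS vendor_key,
           MIN(country)             AS country,
           COUNT(DISTINCT source)   AS source_count
    FROM raw_vendors
    GROUP BY vendor_key
""")
rows = cur.execute(
    "SELECT vendor_key, source_count FROM vendor_master ORDER BY vendor_key"
).fetchall()
print(rows)  # [('ACME CORP', 2), ('GLOBEX', 1)]
```

In practice this kind of normalization would run inside an ETL tool or governance platform rather than a script, but the logic is the same: one canonical key per entity, derived deterministically from the source values.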

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences – all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

Roblox Operating System (ROS) is our internal productivity platform that governs how Roblox operates as a company. Through an integrated suite of tools, ROS shapes how we make talent and personnel decisions, plan and organize work, discover knowledge, and scale efficiently.

We are seeking a Senior Data Engineer to enhance our data posture and architecture, synchronizing data across vital third-party systems like Workday, Greenhouse, GSuite, and JIRA, as well as our internal Roblox OS application database. Our Roblox OS app suite encompasses internal tools and third-party applications for People Operations, Talent Acquisition, Budgeting, Roadmapping, and Business Analytics. We envision an integrated platform that streamlines processes while providing employees and leaders with the information they need to support the business.

This is a new team in our Roblox India location, working closely with data scientists & analysts, product & engineering, and other stakeholders in India & US. You will report to the Engineering Manager of the Roblox OS Team in your local location and collaborate with Roblox internal teams globally.
Work Model: This role is based in Gurugram and follows a hybrid structure — 3 days from the office (Tuesday, Wednesday & Thursday) and 2 days work from home.

Shift Time: 2:00pm - 10:30pm IST (Cabs will be provided)

You Will
- Design and Build Scalable Data Pipelines: Architect, develop, and maintain robust, scalable data pipelines using orchestration frameworks like Airflow to synchronize data between internal systems.
- Implement and Optimize ETL Processes: Apply a strong understanding of ETL (Extract, Transform, Load) processes and best practices for seamless data integration and transformation.
- Develop Data Solutions with SQL: Use your proficiency in SQL and relational databases (e.g., PostgreSQL) for advanced querying, data modeling, and optimizing data solutions.
- Contribute to Data Architecture: Actively participate in data architecture and implementation discussions, ensuring data integrity and efficient data transposition. Manage and optimize data infrastructure, including databases, cloud storage solutions, and API endpoints.
- Write High-Quality Code: Focus on developing clear, readable, testable, modular, and well-monitored code for data manipulation, automation, and software development, with a strong emphasis on data integrity.
- Troubleshoot and Optimize Performance: Apply excellent analytical and problem-solving skills to diagnose data issues and optimize pipeline performance.
- Collaborate Cross-Functionally: Work effectively with cross-functional teams, including data scientists, analysts, and business stakeholders, to translate business needs into technical data solutions.
- Ensure Data Governance and Security: Implement data anonymization and pseudonymization techniques to protect sensitive data, and contribute to master data management (MDM) concepts including data quality, lineage, and governance frameworks.
You Have
- Data Engineering Expertise: At least 6 years of proven experience designing, building, and maintaining scalable data pipelines, coupled with a strong understanding of ETL processes and best practices for data integration.
- Database and Data Warehousing Proficiency: Deep proficiency in SQL and relational databases (e.g., PostgreSQL), and familiarity with at least one cloud-based data warehouse solution (e.g., Snowflake, Redshift, BigQuery).
- Technical Acumen: Strong scripting skills for data manipulation and automation. Familiarity with data streaming platforms (e.g., Kafka, Kinesis), and knowledge of containerization (e.g., Docker) and cloud infrastructure (e.g., AWS, Azure, GCP) for deploying and managing data solutions.
- Data & Cloud Infrastructure Management: Experience managing and optimizing data infrastructure, including databases, cloud storage solutions, and configuring API endpoints.
- Software Development Experience: Experience in software development with a focus on data integrity and transposition, and a commitment to writing clear, readable, testable, modular, and well-monitored code.
- Problem-Solving & Collaboration Skills: Excellent analytical and problem-solving abilities to troubleshoot complex data issues, combined with strong communication and collaboration skills to work effectively across teams.
- Passion for Data: A genuine passion for working with large amounts of data from various sources, and an understanding of the critical impact of data quality on company strategy at an executive level.
- Adaptability: Ability to thrive and deliver results in a fast-paced environment with competing priorities.

Roles that are based in an office are onsite Tuesday, Wednesday, and Thursday, with optional presence on Monday and Friday (unless otherwise noted).
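The synchronization pattern this role centers on (extract from a third-party system, transform, load into an internal store, re-runnable without duplicating data) can be sketched minimally. This is not Roblox's actual pipeline or schema; the source system is stubbed as a list of dicts, and the load uses SQLite purely to keep the example self-contained.

```python
import sqlite3

# Hedged ETL sketch: pull records from a source system (stubbed below),
# transform them, and upsert into an internal store so repeated runs are
# idempotent. System names and fields are illustrative assumptions.
def extract():
    # Stand-in for an API call to a system like Workday or Greenhouse.
    return [
        {"id": 1, "name": "ada", "dept": "eng"},
        {"id": 2, "name": "gus", "dept": "ops"},
    ]

def transform(records):
    # Normalize casing before load.
    return [(r["id"], r["name"].title(), r["dept"].upper()) for r in records]

def load(conn, rows):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)"
    )
    # INSERT OR REPLACE keyed on the primary key keeps re-runs idempotent.
    conn.executemany("INSERT OR REPLACE INTO employees VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
for _ in range(2):  # running the sync twice must not duplicate rows
    load(conn, transform(extract()))
count, = conn.execute("SELECT COUNT(*) FROM employees").fetchone()
print(count)  # 2
```

In an orchestrator such as Airflow, each of these functions would typically become a task, with the idempotent-load property preserved so that retries and backfills stay safe.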
Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Purpose
Assist in building out the backlog of Power BI dashboards, ensuring they meet business requirements and provide actionable insights. Collect and maintain a firmwide inventory of existing reports, identifying those that need to be converted to Power BI. Collaborate with the team to contract and integrate Snowflake, ensuring seamless data flow and accessibility for reporting and analytics.

Desired Skills And Experience
 Candidates should have a B.E./B.Tech/MCA/MBA in Information Systems, Computer Science or a related field
 3+ years' strong experience in developing and managing Power BI dashboards and reports, preferably within the financial services industry
 Experience in data warehousing, SQL, and hands-on expertise in ETL/ELT processes
 Familiarity with Snowflake data warehousing solutions and integration
 Proficiency in data integration from various sources including APIs and databases
 Proficient in SQL for querying and manipulating data
 Strong understanding of data warehousing concepts and practices
 Experience with deploying and managing dashboards on a Power BI server to serve a large number of users
 Familiarity with other BI tools and platforms
 Experience with financial datasets and understanding of private equity metrics
 Knowledge of cloud platforms, particularly Azure, Snowflake, and Databricks
 Excellent problem-solving skills and attention to detail
 Strong communication skills, both written and oral, with business and technical aptitude
 Must possess good verbal and written communication and interpersonal skills

Key Responsibilities
 Create and maintain interactive and visually appealing Power BI dashboards to visualize data insights
 Assist in building out the backlog of Power BI dashboards, ensuring they meet business requirements and provide actionable insights
 Integrate data from various sources including APIs, databases, and cloud storage solutions such as Azure, Snowflake, and Databricks
 Collect and maintain a firmwide inventory of existing reports, identifying those that need to be converted to Power BI
 Collaborate with the team to contract and integrate Snowflake, ensuring seamless data flow and accessibility for reporting and analytics
 Continuously refine and improve the user interface of dashboards based on ongoing input and feedback
 Monitor and optimize the performance of dashboards to handle large volumes of data efficiently
 Work closely with stakeholders to understand their reporting needs and translate them into effective Power BI solutions
 Ensure the accuracy and reliability of data within Power BI dashboards and reports
 Deploy dashboards onto a Power BI server to be served to a large number of users, ensuring high availability and performance
 Ensure that dashboards provide self-service capabilities and are interactive for end-users
 Create detailed documentation of BI processes and provide training to internal teams and clients on Power BI usage
 Stay updated with the latest Power BI and Snowflake features and best practices to continuously improve reporting capabilities

Behavioral Competencies
 Effectively communicate with business and technology partners, peers and stakeholders
 Ability to deliver results under demanding timelines on real-world business problems
 Ability to work independently and multi-task effectively
 Identify and communicate areas for improvement
 Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic
 Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
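The SQL skills this posting asks for are typically exercised in shaping source data into the wide, pivoted form a BI dashboard consumes. As a hedged illustration (table, column, and fund names are invented, and SQLite stands in for Snowflake or another warehouse), conditional aggregation pivots quarterly fund returns into one row per fund:

```python
import sqlite3

# Illustrative only: the kind of SQL shaping that feeds a BI dashboard -
# pivoting per-quarter returns into columns with conditional aggregation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE returns (fund TEXT, quarter TEXT, pct REAL)")
conn.executemany("INSERT INTO returns VALUES (?, ?, ?)", [
    ("Fund A", "Q1", 2.0), ("Fund A", "Q2", 3.0),
    ("Fund B", "Q1", 1.5), ("Fund B", "Q2", 2.5),
])

# CASE inside SUM picks out each quarter; rows without a match contribute
# NULL, which SUM ignores, yielding one pivoted row per fund.
rows = conn.execute("""
    SELECT fund,
           SUM(CASE WHEN quarter = 'Q1' THEN pct END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN pct END) AS q2
    FROM returns
    GROUP BY fund
    ORDER BY fund
""").fetchall()
print(rows)  # [('Fund A', 2.0, 3.0), ('Fund B', 1.5, 2.5)]
```

In Power BI the same reshaping can also be done in Power Query or DAX, but pushing it into the warehouse query usually keeps large dashboards responsive.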

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderābād

On-site

DESCRIPTION

TOC (Transportation Operation Center) is the central command and control center for ‘Transportation Execution’ across the Amazon Supply Chain network, supporting multiple geographies like NA, India and EU. It ensures hassle-free, timely pick-up and delivery of freight from vendors to Amazon fulfillment centers (FC) and from Amazon FCs to carrier hubs. In case of any exceptions, TOC steps in to resolve the issue and keeps all the stakeholders informed on the proceedings. Along with this tactical problem solving, TOC is also charged with understanding trends in network exceptions and then automating processes or proposing process changes to streamline operations. This second aspect involves network monitoring and significant analysis of network data. Overall, TOC plays a critical role in ensuring the smooth functioning of Amazon transportation and thereby has a direct impact on Amazon’s ability to serve its customers on time.

Purview of a Trans Ops Specialist

A Trans Ops Specialist at TOC facilitates the flow of information between different stakeholders (Trans Carriers/Hubs/Warehouses) and resolves any potential issues that impact customer experience and business continuity. A Trans Ops Specialist at TOC works across two verticals – Inbound and Outbound operations. Inbound Operations deals with the Vendor/Carrier/FC relationship, ensuring that freight is picked up on time and is delivered at the FC as per the given appointment; the Trans Ops Specialist on Inbound addresses any potential issues occurring during the lifecycle from pick-up to delivery. Outbound Operations deals with the FC/Carrier/Carrier Hub relationship, ensuring that the truck leaves the FC in order to deliver customer orders as per promise; the Trans Ops Specialist on Outbound addresses any potential issues occurring during the lifecycle of freight leaving the FC and reaching customer premises.
A Trans Ops Specialist provides timely resolution to the issue at hand by researching and querying internal tools and by taking real-time decisions. An ideal candidate should be able to understand requirements, analyze data and notice trends, and drive customer experience without compromising on time. The candidate should have a basic understanding of logistics and should be able to communicate clearly in written and oral form. A Trans Ops Specialist should be able to ideate process improvements and should have the zeal to drive them to conclusion.

Responsibilities include, but are not limited to:
- Communication with external customers (Carriers, Vendors/Suppliers) and internal customers (Retail, Finance, Software Support, Fulfillment Centers)
- Ability to pull data from numerous databases (using Excel, Access, SQL and/or other data management systems) and to perform ad hoc reporting and analysis as needed is a plus
- Develop and/or understand performance metrics to assist with driving business results
- Ability to scope out business and functional requirements for the Amazon technology teams who create and enhance the software systems and tools used by TOC
- Must be able to quickly understand the business impact of the trends and make decisions that make sense based on available data
- Must be able to systematically escalate problems or variance in the information and data to the relevant owners and teams and follow through on the resolutions to ensure they are delivered
- Work within various time constraints to meet critical business needs, while measuring and identifying activities performed
- Excellent communication, both verbal and written, as one may be required to create a narrative outlining weekly findings and the variances to goals, and present these findings in a review forum
- Providing real-time customer experience by working in a 24*7 operating environment
BASIC QUALIFICATIONS
- Bachelor’s degree
- 10-24 months of work experience
- Good communication skills - a Trans Ops Specialist will be facilitating the flow of information between external teams
- Proficiency in Advanced Excel (pivot tables, vlookups)
- Demonstrated ability to work in a team in a very dynamic environment

PREFERRED QUALIFICATIONS
- Logistics background and lean/six sigma training is a plus
- Proficient in SQL

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
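The ad hoc reporting this role calls for (pivot tables over exception data, ranked summaries for a review forum) can be sketched with the standard library alone. The exception records below are invented; the point is only the pivot-table-style tally.

```python
from collections import Counter

# Hedged sketch of pivot-table-style ad hoc analysis: tally late pickups
# by transportation lane and rank them, as one might in Excel or SQL.
# Lane names and statuses are illustrative, not real Amazon data.
events = [
    {"lane": "FC1->HUB2", "status": "late"},
    {"lane": "FC1->HUB2", "status": "on_time"},
    {"lane": "FC3->HUB1", "status": "late"},
    {"lane": "FC1->HUB2", "status": "late"},
]

late_by_lane = Counter(e["lane"] for e in events if e["status"] == "late")
# most_common() sorts lanes by late count descending, like a sorted pivot.
report = late_by_lane.most_common()
print(report)  # [('FC1->HUB2', 2), ('FC3->HUB1', 1)]
```

The same result in Excel would be a pivot table of lane against a count of "late" rows; in SQL, a `GROUP BY lane` with a filtered `COUNT` and `ORDER BY ... DESC`.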

Posted 1 day ago

Apply

20.0 years

0 Lacs

India

On-site

Description

Over the past 20 years, Amazon has reinvented on behalf of customers and has become the largest internet retailer and marketplace in the world. NOC (Network Operation Center) is the central command and control center for ‘Transportation Execution’ across Amazon's transportation network. It ensures hassle-free, timely pick-up and delivery of freight from vendors to Amazon fulfillment centers (FC) and from Amazon FCs to carrier hubs. In case of any exceptions, NOC steps in to resolve the issue and keeps all the stakeholders informed on the proceedings. Along with this tactical problem solving, NOC is also charged with understanding trends in network exceptions and then automating processes or proposing process changes to streamline operations. This second aspect involves network monitoring and significant analysis of network data. Overall, NOC plays a critical role in ensuring the smooth functioning of Amazon transportation and thereby has a direct impact on Amazon’s ability to serve its customers on time.

Within NOC’s umbrella resides a specific arm which manages inbound scheduling, MFI (Missing From Inbound), 3P pickups, vendor returns and invoice scanning operations across India (IN), AMET (South Africa, UAE, KSA, EGY and Turkey), Australia (AU), Japan (JP), Singapore (SG), Brazil (BR) and Mexico (MX).

Purview of a Transportation Specialist

A Transportation Specialist (inbound) at NOC facilitates the flow of information between different stakeholders (Vendors/Sellers/Inbound Supply Chain/category managers/Fulfillment Centers) and resolves any potential issues that impact vendor/seller experience and business continuity.
A Transportation Specialist at NOC works on inbound operations, which deal with appointment scheduling at fulfillment centers as requested by vendors/sellers/carriers, ensuring that the truck reaches the FC for shipment delivery from vendors/sellers as per schedule. The Transportation Specialist on inbound addresses any potential issues occurring during the lifecycle of freight placement and freight unloading at FCs. A Transportation Specialist provides timely resolution to the issue at hand by researching and querying internal tools and by taking real-time decisions. An ideal candidate should be able to understand requirements, analyze data and notice trends, and drive vendor/seller experience without compromising on time. The candidate should have a basic understanding of logistics and should be able to communicate clearly in written and oral form. A Transportation Specialist should be able to ideate process improvements and should have the zeal to drive them to conclusion.

Key job responsibilities
Responsibilities include, but are not limited to:
- Communication with external customers (Carriers, Vendors/Suppliers) and internal customers (Business, Planning, Fulfillment Centers etc.) for freight scheduling, delays in arrivals, delays in unloading at the FC, or any other disruptions in the transportation network
- Ability to pull data from Amazon tools to perform reporting and analysis, thereby providing visibility to leaders and stakeholders
- Develop and/or understand performance metrics (e.g., capacity utilization at Amazon FCs) to assist with driving business results
- Must be able to quickly understand the business impact of the trends and make decisions that make sense based on available data
- Must be able to systematically escalate problems or variance in the information and data to the relevant owners and teams and follow through on the resolutions to ensure they are delivered
- Might be required to work a flexible schedule/shift/work area, including weekends, nights, and/or holidays as per business
- Providing real-time vendor/seller experience by working in a fast-paced operating environment

Basic Qualifications
- Bachelor's degree in a quantitative/technical field such as computer science, engineering, statistics
- Experience with Excel
- Experience with SQL

Preferred Qualifications
- Bachelor's degree in a quantitative/technical field such as computer science, engineering, statistics
- Experience with Excel
- Experience with SQL

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ASSPL - Karnataka
Job ID: A3031449

Posted 1 day ago

Apply

2.0 years

0 Lacs

Hyderābād

On-site

DESCRIPTION

Are you passionate about automation, designing processes, simplifying work and launching innovative products using technology? We are looking for Process Engineers who have the ability to deep dive into processes and invent and simplify with a high degree of ownership. As a Process Engineer, you will be responsible for analyzing operational processes to design, develop, test, launch and continuously improve high-quality self-service software products called Paramount workflows. You will work with the Selling Partner Identity Verification (SPIV) organization to understand their business models and generate technical requirements supported by program technology. You will work cross-functionally with operations, product managers, software engineers, business analysts, data scientists and program managers on medium to large scale projects. In addition, you will develop ownership of process engineering processes to improve the product development lifecycle of Paramount workflows.

Key job responsibilities
- Engage with operations, product, development, and program stakeholders to document requirements, create functional specifications and generate process maps
- Design, develop, test, launch and improve self-service software products such as Paramount workflows
- Operate as a Subject Matter Expert on Amazon’s internal authoring application and workflow engine to develop Paramount workflows
- Engage with software development teams to understand and guide evolving program technology
- Understand and leverage Amazon technology and services
- Deep dive technical product or operational issues to propose and implement simple and effective solutions
- Develop efficient solutions through low- and medium-complexity code implementations, either by integrating existing APIs or creating new APIs to harness the capabilities of Large Language Models (LLMs)
- Persistently drive others to discover and resolve root cause when needed
About the team

The Selling Partner Identity Verification (SPIV) organization is focused on understanding and verifying exactly who we are doing business with (both vendors and sellers) and applying the right verification processes at every stage of their lifecycle. This includes identifying when/where identity changes take place (e.g., dormancy/reactivation, ownership changes, etc.) and re-verifying as needed, understanding which identities/entities are related to each other, and determining who we don’t want to do business with or where we have risk. Given the importance of registration as our starting point to understand who Selling Partners are and who is operating the account, this team also owns the registration seller experience and policies. We design and implement policies, tools and technology innovations to protect the buying experience on Amazon while minimizing friction for sellers.

We are looking for a Process Engineer with a passion for technology and innovation, and with analytical and communication skills. You will enjoy working with technology, and the ability to see your insights drive the creation of real tools and features for our operations teams, thereby impacting the customer experience and seller experience of merchants participating in our Marketplace on a regular basis. You will collaborate with Software Engineering, Data Science, Product Management, Program Management and Operations teams to build a deeper understanding of operational performance and drive improvements which directly influence Amazon’s bottom line.

BASIC QUALIFICATIONS
- 2+ years of software development, or 2+ years of technical support experience
- Experience scripting in Python or JavaScript
- Experience troubleshooting and debugging technical systems
- Experience with SQL databases (querying and analyzing)

PREFERRED QUALIFICATIONS
- Experience with AWS, networks and operating systems

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
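The scripting and troubleshooting skills this posting lists often come together in small diagnostic scripts: parse a log, tally error codes, and surface the most frequent failure for a deep dive. The log format and error codes below are invented for the example.

```python
import re
from collections import Counter

# Hedged sketch of Python scripting for troubleshooting: parse application
# log lines and rank error codes to point at a likely root cause.
# The format "LEVEL code=XX msg=..." is a made-up convention for this demo.
LOG = """\
2024-05-01 10:00:01 ERROR code=E42 msg=verification timeout
2024-05-01 10:00:02 INFO  code=OK  msg=workflow step done
2024-05-01 10:00:03 ERROR code=E42 msg=verification timeout
2024-05-01 10:00:04 ERROR code=E7  msg=missing document
"""

# Capture the code only on ERROR lines; INFO lines are skipped.
pattern = re.compile(r"ERROR code=(\w+)")
errors = Counter(m.group(1) for m in pattern.finditer(LOG))
top = errors.most_common(1)[0]
print(top)  # ('E42', 2)
```

The same tallying done here in Python maps directly onto the SQL skills in the qualifications: a `GROUP BY` over an error-code column with `ORDER BY COUNT(*) DESC` answers the same question against a database.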

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderābād

On-site

DESCRIPTION

NOC (Network Operations Center) is the central command and control center for ‘Transportation Execution’ across the Amazon Supply Chain network, supporting multiple geographies like NA, India and EU. It ensures hassle-free, timely pick-up and delivery of freight from vendors to Amazon fulfillment centers (FC) and from Amazon FCs to carrier hubs. In case of any exceptions, NOC steps in to resolve the issue and keeps all the stakeholders informed on the proceedings. Along with this tactical problem solving, NOC is also charged with understanding trends in network exceptions and then automating processes or proposing process changes to streamline operations. This second aspect involves network monitoring and significant analysis of network data. Overall, NOC plays a critical role in ensuring the smooth functioning of Amazon transportation and thereby has a direct impact on Amazon’s ability to serve its customers on time.

Within NOC’s umbrella resides a fast-growing Last Mile support function – AMZL CO (Amazon Logistics Central Operations). AMZL CO is a team focused on driving higher quality at lower cost through standard work, leveraging central management of the network. Central Operations (CO) supports daily planning and execution functions that impact Delivery Station (DS) operations across the AMZL and EDSP/XPT network. CO aims to bring efficiencies to processes through standardization, programmatic interventions and automations that improve planning, scheduling and routing efficiencies, reduce cost and free up time for station operators to focus on operational work.
We cover the following functional areas with global parity:
(i) Central Allocation – removes operator judgement on channel allocation by planning via O-TREAT (4 weeks to 1 week ahead) and 24-hour-forecasting-based D-1 capacity adjustments
(ii) Centralized Routing and Scheduling (CRS) – executes block scheduling (1 week ahead, D-1 block release) and route planning (D-day) of on-road capacity centrally
(iii) CO Systems Management (COSM) – performs station jurisdiction and sector configurations via JAS (Jurisdiction Authority Service), and handles sort and route planning configurations
(iv) Driver Support (CO DS) – aims to streamline the delivery process for DSPs and drivers by coordinating rescues through global tools, Rescue Planner (RP) and Mission Control (MC)
(v) Channel support for DSP, Flex and Hub DP, along with account and payment management – WST entry validation, invoicing and weather incentives

The CO team embarked on the journey of becoming the operations execution partner of NA and EU COs in Jun’21, with an immediate objective of leveraging people cost benefits through targeted offshoring and, in the long term, standardizing AMZL CO processes and technology in NA, EU and RoW (Rest of World) countries to establish worldwide parity, providing a platform for knowledge sharing and building a hybrid structure for local innovation and speed to market while optimizing gearing ratios and cost structures. We named the broader program MARCOPOLO.

Marcopolo Vision: NOC’s vision is to build a global Center of Excellence by being the prime provider of Last Mile Central Operations (CO) execution services to NA, EU and RoW marketplaces in the next 3 years.
This org will:
1) provide 24x7 coverage to all geographies
2) leverage centralization at scale to optimize HC through improved Operator Utilization by unlocking synergies across time zones
3) ensure at-par or better SLA and quality by closely monitoring audit performance
4) enable operational parity and standardization across workstreams and geographies
5) leverage the in-house automation team to automate manual execution
6) work closely with in-country program and operations teams to provide inputs on large-scale process improvement programs, including hands-off-the-wheel automations
7) support global expansion and standardization, leveraging learnings and best practices across geographies
8) facilitate joint OP request submission exercises to product and tech teams by incorporating use cases across geographies

Purview of a Trans Ops Specialist

A Trans Ops Specialist at NOC facilitates the flow of information between different stakeholders (Trans Carriers/Hubs/Warehouses) and resolves any potential issues that impact customer experience and business continuity. A Trans Ops Specialist at NOC works across two verticals – Inbound and Outbound operations. Inbound Operations deals with the Vendor/Carrier/FC relationship, ensuring that freight is picked up on time and is delivered at the FC as per the given appointment; the Trans Ops Specialist on Inbound addresses any potential issues occurring during the lifecycle from pick-up to delivery. Outbound Operations deals with the FC/Carrier/Carrier Hub relationship, ensuring that the truck leaves the FC in order to deliver customer orders as per promise; the Trans Ops Specialist on Outbound addresses any potential issues occurring during the lifecycle of freight leaving the FC and reaching customer premises. A Trans Ops Specialist provides timely resolution to the issue at hand by researching and querying internal tools and by taking real-time decisions.
An ideal candidate should be able to understand requirements, analyze data and notice trends, and drive Customer Experience without compromising on time. The candidate should have a basic understanding of Logistics and should be able to communicate clearly in written and oral form. A Trans Ops Specialist should be able to ideate process improvements and should have the zeal to drive them to conclusion. We are open to hiring candidates to work out of Hyderabad who are willing to come to the office all 5 working days of the week.
Key job responsibilities
- Communication with external customers (Carriers, Vendors/Suppliers) and internal customers (Retail, Finance, Software Support, Fulfillment Centers).
- Ability to pull data from numerous databases (using Excel, Access, SQL and/or other data management systems) and to perform ad hoc reporting and analysis as needed is a plus.
- Develop and/or understand performance metrics to assist with driving business results.
- Ability to scope out business and functional requirements for the Amazon technology teams who create and enhance the software systems and tools used by NOC.
- Must be able to quickly understand the business impact of trends and make decisions based on available data.
- Must be able to systematically escalate problems or variances in information and data to the relevant owners and teams, and follow through on resolutions to ensure they are delivered.
- Work within various time constraints to meet critical business needs, while measuring and identifying activities performed.
- Excellent communication, both verbal and written, as one may be required to create a narrative outlining weekly findings and variances to goals, and present these findings in a review forum.
- Providing real-time customer experience by working in a 24x7 operating environment.
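The ad hoc reporting responsibility above can be sketched with a minimal, self-contained example. The table name, columns and exception categories here are hypothetical illustrations (real NOC data lives in internal Amazon systems), and an in-memory SQLite database stands in for whatever data source is actually queried:

```python
import sqlite3

# Hypothetical exception log; real NOC data sits in internal tooling.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exceptions (lane TEXT, status TEXT, delay_hours REAL)")
conn.executemany(
    "INSERT INTO exceptions VALUES (?, ?, ?)",
    [
        ("FC1-HUB3", "LATE_PICKUP", 2.5),
        ("FC1-HUB3", "LATE_PICKUP", 1.0),
        ("FC2-HUB1", "MISSED_APPT", 4.0),
    ],
)

# Ad hoc report: exception count and average delay per lane and status,
# worst-hit lanes first.
report = conn.execute(
    """SELECT lane, status, COUNT(*) AS n, AVG(delay_hours) AS avg_delay
       FROM exceptions
       GROUP BY lane, status
       ORDER BY n DESC"""
).fetchall()

for row in report:
    print(row)
```

The same shape of query (filter, group, aggregate, rank) covers most of the one-off escalation and trend questions described above, regardless of whether it runs against Excel exports, Access or a production database.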
About the team
NOC (Network Operation Center) is the central command and control center for ‘Transportation Execution’ across the Amazon Supply Chain network. It ensures hassle-free, timely pick-up and delivery of freight from vendors to Amazon fulfillment centers (FCs) and from Amazon FCs to carrier hubs. In case of any exceptions, NOC steps in to resolve the issue and keeps all stakeholders informed of the proceedings. Along with this tactical problem solving, we understand trends in network exceptions and automate processes or propose process changes to streamline operations, work that involves network monitoring and significant analysis of network data.
BASIC QUALIFICATIONS
- Bachelor's degree in a quantitative/technical field such as computer science, engineering or statistics.
PREFERRED QUALIFICATIONS
- Experience with Excel.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Fueled by strategic investment in technology and innovation, Client Technology seeks to drive growth opportunities and solve complex business problems for our clients by building a robust platform for business and a powerful product engine that are integral to innovation at scale. You will work with technologists and business specialists, blending EY’s deep industry knowledge and innovative ideas with our platforms, capabilities, and technical expertise. As a champion for change and growth, you will be at the forefront of integrating emerging technologies, from AI to Data Analytics, into every corner of what we do at EY. That means more growth for you, exciting learning opportunities, career choices, and the chance to make a real impact.
The opportunity: We are looking for a highly experienced Power BI developer who will be part of the Data Engineering team within EY Client Technology’s Advanced Analytics Team. The candidate will be responsible for designing, developing and maintaining Power BI data models, reports and dashboards. If you are passionate about business intelligence and analytics and have a knack for turning complex data into actionable insights, we want to hear from you.
To qualify for the role, you must have:
- Strong proficiency in Power BI, including DAX and the Power Query formula language (M language).
- Advanced understanding of data modeling, data warehousing and ETL techniques.
- Experience designing, developing and maintaining Power BI reports (including paginated reports) and dashboards to support business decision-making processes.
- Experience designing, developing and implementing Power BI data models for complex and large-scale enterprise environments.
- Proven experience with deploying and optimizing large datasets.
- Proficiency in SQL and other data querying languages.
- Strong collaboration, analytical, interpersonal and communication abilities.
Ideally, you’ll also have:
- A Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
- Microsoft Power BI certification.
- Experience with other BI tools.
- Experience working within large teams to successfully implement Power BI solutions.
- Sound knowledge of the software development lifecycle and experience with Git.
- The ability to propose solutions by recalling best practices learned from Microsoft documentation, whitepapers and community publications.
What we look for: We want people who are self-starters, who can take initiative and get things done. If you can think critically and creatively to solve problems, you will excel. You should be comfortable working with culturally diverse outsourced on/offshore team members, which means you may need to work outside of the normal working hours in your time zone to partner with other Client Technology staff globally. Some travel may also be required, both domestic and international.
What we offer: As part of this role, you'll work in a highly integrated, global team with the opportunity and tools to grow, develop and drive your career forward. Here, you can combine global opportunity with flexible working. The EY benefits package goes above and beyond too, focusing on your physical, emotional, financial and social well-being.
Your recruiter can talk to you about the benefits available in your country. Here’s a snapshot of what we offer:
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

20.0 years

0 Lacs

India

On-site

Description
Over the past 20 years, Amazon has reinvented on behalf of customers and has become the largest internet retailer and marketplace in the world. NOC (Network Operation Center) is the central command and control center for ‘Transportation Execution’ across Amazon's transportation network. It ensures hassle-free, timely pick-up and delivery of freight from vendors to Amazon fulfillment centers (FCs) and from Amazon FCs to carrier hubs. In case of any exceptions, NOC steps in to resolve the issue and keeps all stakeholders informed of the proceedings. Along with this tactical problem solving, NOC is also charged with understanding trends in network exceptions and then automating processes or proposing process changes to streamline operations; this second aspect involves network monitoring and significant analysis of network data. Overall, NOC plays a critical role in ensuring the smooth functioning of Amazon transportation and thereby has a direct impact on Amazon’s ability to serve its customers on time. Within NOC’s umbrella resides a specific arm which manages Inbound scheduling, MFI (Missing From Inbound), 3P pickups, Vendor returns and invoice scanning operations across India (IN), AMET (South Africa, UAE, KSA, EGY and Turkey), Australia (AU), Japan (JP), Singapore (SG), Brazil (BR) and Mexico (MX).
Purview of a Transportation Specialist
A Transportation Specialist (Inbound) at NOC facilitates the flow of information between different stakeholders (Vendors/Sellers/Inbound Supply Chain/category managers/Fulfillment Centers) and resolves any potential issues that impact vendor/seller experience and business continuity.
Transportation Specialists at NOC work on Inbound operations, which deal with appointment scheduling at Fulfillment Centers as requested by vendors/sellers/carriers, ensuring that the truck reaches the FC for shipment delivery from vendors/sellers as per schedule. The Transportation Specialist on Inbound addresses any potential issues occurring during the lifecycle of freight placement and freight unloading at FCs. A Transportation Specialist provides timely resolution to the issue in hand by researching and querying internal tools and by taking real-time decisions. An ideal candidate should be able to understand requirements, analyze data and notice trends, and drive vendor/seller experience without compromising on time. The candidate should have a basic understanding of Logistics and should be able to communicate clearly in written and oral form. A Transportation Specialist should be able to ideate process improvements and should have the zeal to drive them to conclusion.
Key job responsibilities
A Transportation Representative at NOC facilitates the flow of information between different stakeholders (Warehouses/Category teams/Carriers) and resolves any potential issues that impact customer experience and business continuity. The Transportation Representative at NOC works across Inbound operations. Inbound Operations deals with the Vendor/Carrier/FC relationship, planning freight for delivery in warehouses as per the given appointment time. The Transportation Representative on Inbound addresses any potential issues occurring during the lifecycle from forecasting to actual delivery of the appointment. A Transportation Representative provides timely resolution to the issue in hand by researching and querying internal tools and by taking real-time decisions. An ideal candidate should be able to understand requirements, analyze data and notice trends, and drive Customer Experience without compromising on time.
The candidate should have a basic understanding of Logistics and should be able to communicate clearly in written and verbal form.
About The Team
The NOC Inbound Team manages and owns the end-to-end execution of the vendor’s/seller’s shipment inbounding process. This includes appointment scheduling and prioritization, appointment sidelining and rescue, appointment modification, and other related processes/tasks across IN and ECCF countries.
Basic Qualifications
- Graduation in any specialization from a recognized university.
- Excellent communication skills (written and verbal) in the English language; ability to communicate correctly and clearly with all customers.
- Good comprehension skills – ability to clearly understand and state the issues customers present.
- Ability to concentrate – follow customers' issues without distraction to resolution.
- Work successfully in a team environment as well as independently.
- Familiarity with Windows, Microsoft Outlook, Microsoft Word and internet browsers, and excellent typing skills.
- Demonstrates an ability to successfully navigate websites.
- Demonstrates a proficient knowledge of email applications.
Preferred Qualifications
- Graduation in any specialization from a recognized university.
- Excellent communication skills (written and verbal) in the English language.
- Ability to communicate correctly and clearly with all customers.
- Good comprehension skills.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ASSPL - Karnataka Job ID: A3031448

Posted 1 day ago

Apply
